Note (12/2015): Hi there! I'm taking some time off here to focus on other projects for a bit. As of October 2016, those other projects include a science book series for kids titled Things That Make You Go Yuck! -- available at Barnes and Noble, Amazon and (hopefully) a bookstore near you!

Co-author Jenn Dlugos and I are also doing some extremely ridiculous things over at Drinkstorm Studios, including our award-winning webseries, Magicland.

There are also a full 100 posts right here in the archives, and feel free to drop me a line with comments, suggestions or wacky cold fusion ideas. Cheers!


· Categories: Computers, Mathematics
What I’ve Learned:


If you look closely enough, even really complicated mathematics breaks down into simple logic. And if you look at simple logic closely enough… well, it’s really freaking complicated.

Take the word “or”. We use “or” between things all the time. Cake or pie. French fries or onion rings. Coffee, tea or milk.

(And also between things that aren’t food, probably. I skipped lunch, so I’ve got a bit of a one-track stomach right now.)

Mostly, our “or”s mean you can have one thing or the other — but that’s not always true. This is the 21st century, and we’re in ‘Murrica, dadgummit, so if you want half fries and half rings, you can have it. Milk in your coffee? No problem. You want a pie baked inside a cake, on top of another cake with a pie in it?

Well, of course you do. Because ‘Murrica.

These ambiguous “or”s are fine in conversation — and in diners, bakeries and burger joints, apparently — but they won’t do when it comes to math and logic. For that, you need something more specific. More restrictive. You need XOR.

XOR — or “exclusive or”, if you like — is a logical operator that denotes the less generous sort of “or”. XOR is “or” with a mean disciplinarian streak. It’s the Ebenezer Scrooge of “or”. The angry ruler-wielding Catholic nun of “or”. And when separating two choices, XOR is the big ugly punk Highlander of “or”: THERE CAN BE ONLY ONE!

In logical terms, a pairwise XOR represents the choice of “A or B, but not A and B”. But this is logic, so it’s not that simple. You can slip XORs between any number of items — Bob XOR Carol XOR Ted XOR Alice, for instance — and in the general case, XOR is true when an odd number of things are true.
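That “odd number of trues” rule is easy to poke at in code. A minimal Python sketch (the `xor_all` helper is my own naming, not anything standard):

```python
from functools import reduce
from operator import xor

def xor_all(*values):
    """Chain XOR across any number of inputs.

    The result is True exactly when an odd number of inputs are True.
    """
    return reduce(xor, (bool(v) for v in values), False)

# Pairwise: "A or B, but not A and B"
print(xor_all(True, False))        # True  -- exactly one is true
print(xor_all(True, True))         # False -- both true, so neither

# General case: true when an odd number of things are true
print(xor_all(True, True, True))   # True  -- coffee, tea AND milk, mixed
print(xor_all(True, True, False))  # False -- two trues is an even number
```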

So XOR isn’t totally stingy, but you might not like the results. You can have any one of coffee, tea or milk — or you can have all three mixed together, because three is an odd number. Wake up and smell that in the morning, I dare you.

Outside of pure math operations, XOR has some interesting practical uses. It’s used when generating random numbers to ensure that “random” really is random. XOR is also used in cryptography, sometimes alone as a simple “XOR cipher”, but usually as part of a more complicated system.
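The “XOR cipher” mentioned above really is that simple: XOR each byte of the message with a repeating key, then XOR again with the same key to get the message back, since `(x ^ k) ^ k == x`. A toy Python sketch (function name mine; please don’t protect real secrets this way):

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Apply a repeating-key XOR cipher.

    Because (x ^ k) ^ k == x, the same function both
    encrypts and decrypts.
    """
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

secret = xor_cipher(b"cake or pie", b"XOR")
print(secret)                        # scrambled bytes
print(xor_cipher(secret, b"XOR"))    # round trip restores b'cake or pie'
```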

And there’s something called an “XOR swap algorithm”, which I don’t actually understand at all, but I assume has something to do with Bob caking Alice’s pie while Carol milks Ted’s coffee. Or something.
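For anyone curious, the XOR swap trades two values without using a temporary variable, and it has nothing to do with Bob, Carol, Ted or Alice. A short Python sketch:

```python
def xor_swap(a: int, b: int):
    """Swap two integers using only XOR, no temporary variable.

    Each step relies on x ^ x == 0 and x ^ 0 == x.
    """
    a = a ^ b   # a now holds a XOR b
    b = a ^ b   # (a ^ b) ^ b == a, so b now holds the original a
    a = a ^ b   # (a ^ b) ^ a == b, so a now holds the original b
    return a, b

print(xor_swap(7, 42))  # (42, 7)
```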

The important thing is, there’s “or” and then there’s XOR. So if you’re offering someone a choice and feeling particularly stingy, Scroogy or Highlander-y, remember the “exclusive or” that the math and logic types use. Because “or” is fine — but XOR is delicious.

Actual Science:
University of Maryland CS, “The magic of XOR”
Malwarebytes, “Nowhere to hide: three methods of XOR obfuscation”
Logic.ly, “XOR gate”
CCSI, “The XOR problem and solution”

Image sources: PSU / Teaching with Databases (XOR Venn), Flying Monkey Philly (pumpple cake, which is somehow actually a thing), The Daily Banter (shiny happy Kurgan), 429 (B, C, T, A)


· Categories: Computers
What I’ve Learned:

“Zombie computer: beware the night of the living Dells.”

Zombies are kind of a big deal these days. If you’re a fan of TV or movies or video games, you’ve surely seen them — and like actual zombies, they’re still multiplying. It’s like somebody ran a zombie through a Dr. Seuss-ifier:

You’ve got fast ones and slow ones and now one with an ‘i’.
They crave brain, feel no pain and just want you to die.

There are zombies that walk and zombies that talk and zombies that grin like Fairuza Balk.
Some zombies dance and others fight plants and by now, one of them might be Jack Palance.

(Sorry. Too soon?)

The point is, zombies are everywhere in fiction — but they’re also everywhere in real life, in an insidious form you don’t often hear about. I’m talking about zombie computers, and there are millions upon millions of them just waiting to eat your… well, not brains, exactly. But probably your bandwidth. And these days, that’s just as bad.

A zombie computer — or just zombie, if you like — is a device that’s been taken over by a malicious user or bit of software, and now unquestionably does the bidding of its nefarious master. Once the machine is hacked into or infected with a virus or Trojan horse or computer worm, it can become a zombie without anyone around it ever knowing.

(Unlike zombie humans, zombie computers apparently don’t decompose, start to smell or shuffle down the street mumbling, “CPUUuuuuus, CPUUuuuUUUUSSss…” So they’re harder to identify.)

And while Dr. Frankenstein used his “zombie” to terrorize the townspeople or a voodoo priest might use a zombie army to, I don’t know, make a really big batch of jambalaya, maybe, controllers of zombie computers usually have much, much more sinister stuff in mind.

Like spam.

The puppet master of a bunch of zombie computers can coordinate them into something called a “botnet”, which is just a big gaggle of infected computers doing whatever they’re told. And some people tell them to send billions upon billions of junk emails to people all over the world.

Security experts estimate that roughly two-thirds of all email sent is “spam” of some kind, and much of that — up to eighty percent, according to one study — comes from zombie computers in botnets. It’s thought that a ten-thousand-computer botnet — which is not particularly large; botnets have been seen with over one million zombie computers — can send up to fifty billion emails in a single week.
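For the skeptical, a quick back-of-envelope on those figures (the inputs are the rough estimates quoted above, not measurements):

```python
# Back-of-envelope check on the botnet figures above.
botnet_size = 10_000                  # a modest botnet
emails_per_week = 50_000_000_000      # fifty billion, with a "b"

per_bot_per_week = emails_per_week / botnet_size
per_bot_per_second = per_bot_per_week / (7 * 24 * 60 * 60)

print(f"{per_bot_per_week:,.0f} emails per zombie per week")     # 5,000,000
print(f"{per_bot_per_second:.1f} emails per zombie per second")  # 8.3
```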

That’s “billion”, with a “b”. Kinda makes those zombie hordes on TV look like a couple of kindergarten kids, eh?

Of course, zombie computer masters can do worse than flood a few (billion) inboxes. Botnets can also be used to inflate hit counts on websites; to bombard a site with so many simultaneous requests that it effectively shuts down (known as a DDoS, or distributed denial of service attack — very nasty); and for identity theft, bank fraud, extortion, espionage and, of course, recruiting more victims. What good would a zombie computer be, if it didn’t reach out and bite a few uninfected innocents?

So enjoy the science fiction shows and films and games featuring “scary” zombies that can’t actually crawl out of the grave and get you. But be wary of that laptop or PC that you’re watching or playing on. That could be a real zombie, sitting in your very own living room. Maybe even on your lap.


Image sources: Pocket Fives (zombie computer botnet), Design and Trend (i[cecream]Zombie), Socialite Life (Balk, batty), The Var Guy (botnets after your braaaaaaains…)


· Categories: Computers
What I’ve Learned:

“Swarm robotics: You guys like swarms of things, right?”

Lots of great things come in swarms. Hornets. Locusts. One Direction fans.

Okay, so none of those things are particularly great. But robots are pretty great, and now robots come in swarms, too.

Swarm robotics hasn’t been around long, since it requires robots with three characteristics of animals that swarm together: small size, good mobility and cheap production.

And in the case of 1D fans, squealiness. But that’s not as important.

The concept behind swarm robotics is borrowed from biology, and is called “emergent behavior”. Basically, it’s the idea that a bunch of mostly-identical critters of limited intelligence can work together to do something useful that they couldn’t manage as individuals. In nature, that might be to migrate to a new nest or strip a cornfield down to its roots. Or to vote Harry Styles dreamiest Teen Beat dreamboat.
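Emergent behavior is easier to believe once you’ve watched it happen. Here’s a toy Python simulation of my own devising: every “robot” follows one tiny local rule (drift toward the average position of the others), and the swarm clumps together with no robot in charge and no master plan:

```python
import random

def step(positions, speed=0.1):
    """One tick: each robot nudges toward the average of the others.

    No robot knows the group's goal; clustering simply emerges from
    every unit applying the same tiny local rule.
    """
    new = []
    for i, (x, y) in enumerate(positions):
        others = [p for j, p in enumerate(positions) if j != i]
        cx = sum(p[0] for p in others) / len(others)
        cy = sum(p[1] for p in others) / len(others)
        new.append((x + speed * (cx - x), y + speed * (cy - y)))
    return new

def spread(ps):
    """Mean squared distance from the swarm's center: how scattered it is."""
    cx = sum(x for x, _ in ps) / len(ps)
    cy = sum(y for _, y in ps) / len(ps)
    return sum((x - cx) ** 2 + (y - cy) ** 2 for x, y in ps) / len(ps)

random.seed(1)
swarm = [(random.uniform(-10, 10), random.uniform(-10, 10)) for _ in range(20)]

before = spread(swarm)
for _ in range(50):
    swarm = step(swarm)
print(spread(swarm) < before)  # True -- the swarm has clumped together
```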

Happily, when it comes to swarm robotics, the mechanical critters — or the people programming them — are usually more sensible.

The ultimate goals of swarm robotics include things like digging mines or harvesting crops or building structures. Someday, particularly tiny robots might scurry into our bodies to clear out arteries or slice up a tumor or slap together a new liver.

Or they’ll take over the planet, build a machine city and plug all of surviving humanity into the Matrix. Which would be slightly less helpful.

For the moment, scientists are limited to current robot technology, which includes wheeled self-assembling Rubik-sized cubes and coin-sized microbots that skitter around on toothpick legs. Neither is very impressive in the singular — they’re like miniaturized Roombas that don’t bother to vacuum any more. But with a bunch of these robots (and the right programming), engineers can do some pretty interesting things.

With a few simple instructions, for instance, swarm robots have assembled to pass obstacles a single unit couldn’t navigate, and to collectively move objects much heavier than any component robot. There are even termite-inspired projects with robots that cooperatively figure out how to build simple structure designs. And recently, a team at Harvard University coaxed the largest-yet swarm of teenybots — over one thousand strong — to arrange themselves into specified shapes, using a set of extremely basic rules.

So long as one of those shapes wasn’t “Skynet”, we’re probably going to be okay. For a few more years, at least.

The real power of swarm robotics comes with numbers. As the motors and sensors and other fiddly bits get smaller and cheaper, scientists can put more of their robo-critters into action. For some jobs, it doesn’t matter if one, or even half, of them fails. Sheer numbers — and a few snippets of code — will see them through larger and larger tasks. It’s like having a nest full of insects ready to do your bidding, or a tiny team of not-especially-bright butlers waiting to serve your every whim.

So while our future could hold Matrix enslavement — or worse, an endless horde of angry Benders — for now, swarm robotics is a promising field that may help us solve some very tricky and important engineering problems.

Like getting rid of One Direction. Seriously, robotics people. How come none of you is working on that?

Image sources: RedOrbit (sea of Kilobots), Zimbio (squealy concert girls), Gunaxin (Matrix robot swarm face), Den of Geek (Bender horde)


· Categories: Computers
What I’ve Learned:

“Turing test: where men are men, except sometimes they’re not.”

Like most computery sorts of things, Turing tests are only properly understood by a few pale geniuses who know how slide rules work and never have anything to do on Friday nights.

(Except to argue about the “proper” understanding of Turing tests. And catch up on Eureka reruns.)

But a Turing test basically boils down to one simple question:

Can a computer convince an “average interrogator” that it is human — and not, in fact, a computer?

The Turing test was first proposed by British mathematician and computer scientist Alan Turing in the 1950s. This was around the time when people first began to wonder whether machines could someday think on their own. Only nobody could define precisely what constituted “thinking”, and computers the size of post offices could scarcely rub two digits together, so the question went mostly nowhere.

That’s when Turing — now considered the father of theoretical computer science — posed his question, which was much easier to test. Whether the computer can “think” or not, can it fool people into believing it’s a live, thinking person? Turing tests come in a few flavors, but they mostly work like this:

An interrogator types questions to two test subjects — one of whom is flesh-and-blood human, while the other is motherboard-and-capacitor machine. Each answers via text — no cheating where one of the voices sounds like C-3PO or Bender — and after a few rounds, the interrogator decides which is the human. If he or she chooses the computer more than thirty percent of the time, the machine passes the Turing test.
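That setup can be sketched as a toy program. Everything below is purely illustrative: the deflecting “machine” and the interrogators are hypothetical stand-ins of my own invention, not a real chatbot:

```python
from itertools import cycle

# A hopeless toy "contestant" -- deflection is the oldest chatbot trick.
_dodges = cycle([
    "Why do you ask?",
    "Hmm, what do you think?",
    "I'd rather not say.",
])

def machine_reply(question):
    """Answer any question with the next canned deflection."""
    return next(_dodges)

def run_turing_test(interrogate, rounds=10, pass_threshold=0.30):
    """Score an interrogator function against the toy machine.

    `interrogate` asks a question, reads the reply, and returns True if
    it judges the respondent to be human. Fooling the interrogator more
    than thirty percent of the time counts as a "pass" in Turing's framing.
    """
    fooled = sum(interrogate(machine_reply) for _ in range(rounds))
    return fooled / rounds > pass_threshold

# A very generous interrogator, who believes anything with a question mark:
gullible = lambda ask: ask("Are you human?").endswith("?")
print(run_turing_test(gullible))           # True -- fooled 7 rounds out of 10

# A paranoid interrogator, who believes nothing:
print(run_turing_test(lambda ask: False))  # False -- never fooled
```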

And then Skynet probably gets built and melty-face Robert Patrick comes back in time to kill John Connor and all of human civilization will depend on Arnold Schwarzenegger saving our asses again, which seems way less likely now that he’s a member of the AARP.

Still, people perform Turing tests. Probably because they want to know when to start hoarding canned food and electromagnetic pulse bombs.

For decades, Turing tests were pretty non-apocalypse-portending. A gadget might do okay with questions limited to one subject, or when trying to pass itself off as a paranoid schizophrenic. But until recently, no computers had gotten especially close to passing a Turing test using what science would consider an average interrogator.

(In contrast to life outside science, where Siri and chatbots and Japanese girlfriend simulators have been talking people into giving away money and marriage proposals for years.

Clearly, the bar for “average interrogator” is just a leeeetle bit lower in some parts of the digital world.)

But just this week, a computer managed to pass a Turing test administered in London, fooling thirty-three percent of interrogators into believing it was a thirteen-year-old boy.

Which some might argue is only a small step up from a paranoid schizophrenic. Still — progress.

Of course, many in the artificial intelligence community don’t pay much attention to Turing tests. There are many ways to run one, and (see above) many ways in which people allow themselves to be fooled by relatively unsophisticated programs. Besides these difficulties, some computer scientists question the very relevance of Turing tests in the modern age. The goal of AI, after all, is to make machines more intelligent — not to make them more like humans.

I’ll leave it to the reader to decide how wide the chasm is between those two goals. Just remember — we created Two and a Half Men. So there is a chasm. Clearly.

Image sources: Computer Science Unplugged (Turing test cartoon), Futurama Point (angry Bender), What Culture (Robert Patrick/T-1000), OffTopics (The Termin-older)
