AI and the Horse it Rode in On
- briangparker63
- 3 days ago
- 4 min read
Part I: The Horse
Artificial intelligence (AI) is technology that lets machines mimic how humans learn, understand, solve problems, make decisions, and create.
AI-powered apps and devices can recognize what they see, understand and respond to human language, learn from new data, give detailed recommendations, and even act on their own, like a self-driving car.

Full disclosure: I use AI. Sometimes it’s Google Gemini or Venice.ai to generate the images I use in My Mind and Welcome to It; sometimes it’s Microsoft Copilot to outline a presentation. Those are the big AI applications you hear about in the news (along with more controversial ones like Elon Musk’s Grok). There’s also a slew of industry-specific AI applications.
Whether you know it or not, you use AI. A lot. Chatbots answer whenever you call for service (the ones that admit to being AI from the get-go, the ones that pretend to be human until you get pissed off enough to demand a real human, and the sneaky ones that take over for the original AI and pretend to be a real human even though they’re really just AI junior). Grammarly, which has been around a while, offers helpful suggestions to improve your writing (and mine). Your car warns you that you’re about to back into something or that you’re getting too close to the car in front of you, controls your cruise, tells you where to go, and sometimes does ALL the driving. Your phone uses AI, and so do digital assistants and social media. Many home gadgets use it too, like robot vacuums and security systems. AI is everywhere and unavoidable.
AI, or the idea of it, has been around for at least 76 years, since Alan Turing published “Computing Machinery and Intelligence.”
Or has it? The idea, at least, dates to ancient Greek mythology, when Hephaestus created automated handmaids and gave them the knowledge of the gods. And the first step towards actual AI (and, by the way, the Industrial Revolution) came around 250 BC, when Ctesibius built a self-regulating water clock, the world’s first automatic system.

In 1206, Ismail al-Jazari published the first recorded designs for programmable automata, work that later earned him the title of Father of Robotics. Centuries later, Leonardo da Vinci, known for his extensive research into automation, expanded on these ideas and designed, possibly even built, an armored mechanical knight in 1495.
In 1726, Jonathan Swift wrote in Gulliver’s Travels of a machine called “The Engine”—the earliest known reference to a computer. Then, in 1872, Samuel Butler suggested Charles Darwin’s theory of evolution could be applied to machines.
Between 1819 and 1822, mathematician and inventor Charles Babbage designed the Difference Engine, a mechanical calculator for tabulating polynomial functions, and built a small working model of it. More than a century later, in 1939, physicist John Vincent Atanasoff and graduate student Clifford Berry created the first electronic digital computing machine. It wasn’t programmable, but it could solve up to twenty-nine linear equations at once.

So, back to the modern age. AI, or the idea of it, has been around for at least 76 years, since Alan Turing published “Computing Machinery and Intelligence,” in which he asked whether machines could show human-like intelligence. In that paper, Turing introduced the Imitation Game, now known as the Turing Test, which has become the foundation of AI and its development.
The Turing Test checks whether a machine can hold a conversation so naturally that a human judge can’t tell it isn’t a person. If the judge can’t reliably spot the machine, the machine passes. Importantly, the machine doesn’t have to answer correctly; it just has to fool the evaluator.
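To make that setup concrete, here’s a minimal sketch of the imitation game as a protocol, in Python. Everything in it (the respondent functions, the canned answer, the pass threshold) is my own illustration, not anything from Turing’s paper.

```python
import random

# Hypothetical stand-ins for the two conversation partners.
def human_reply(question: str) -> str:
    return "Honestly, I'd have to think about that one."

def machine_reply(question: str) -> str:
    return "Honestly, I'd have to think about that one."  # a good mimic sounds human

def imitation_game(judge_guess, rounds: int = 5) -> bool:
    """The machine passes if the judge can't reliably spot it."""
    correct = 0
    for _ in range(rounds):
        # Hide who is who: the judge only sees two unlabeled answers.
        pair = [("machine", machine_reply), ("human", human_reply)]
        random.shuffle(pair)
        answers = [fn("What do you think of this poem?") for _, fn in pair]
        guess = judge_guess(answers)  # judge picks index 0 or 1 as "the machine"
        if pair[guess][0] == "machine":
            correct += 1
    # Guessing at chance level (about half right) means the machine passed.
    return correct <= rounds / 2

# A judge with no tell to go on can only guess at random.
print("Machine passed:", imitation_game(lambda answers: random.randint(0, 1), rounds=100))
```

The point the code makes is Turing’s: the test measures indistinguishability, not correctness.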

In 1956, the IT guys at Los Alamos (apparently fresh out of A-bomb ideas) set their MANIAC computer loose on chess, and it became the first computer to defeat a human in a chess-like game, beating a novice in twenty-three moves under the simplified Los Alamos rules. Chess seemed like a pretty good yardstick for AI progress, so human-computer chess matches have been a frequent occurrence throughout the past 70 years. By the late 1980s, chess computers could finally beat strong human players. Their biggest milestone came in 1997, when IBM’s Deep Blue defeated World Chess Champion Garry Kasparov.

In the 1960s, Joseph Weizenbaum and his team at MIT created ELIZA, an early chatbot that mimicked conversation by matching patterns in what users typed and swapping in scripted responses. This made it seem to understand, even though it didn’t grasp meaning at all. Because of this, ELIZA became one of the first programs capable of attempting the Turing Test.
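For a rough feel of that trick, here’s a tiny ELIZA-style exchange in Python. The patterns and canned replies are my own simplified inventions, not Weizenbaum’s actual DOCTOR script, but the mechanism (match a pattern, reflect the pronouns, substitute into a template) is the same.

```python
import re

# A few hypothetical ELIZA-style rules: regex pattern -> reply template.
RULES = [
    (re.compile(r"\bi need (.+)", re.I), "Why do you need {0}?"),
    (re.compile(r"\bi am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bmy (.+)", re.I), "Tell me more about your {0}."),
]
FALLBACK = "Please go on."  # what ELIZA-style bots say when nothing matches

REFLECT = {"my": "your", "i": "you", "am": "are", "me": "you"}

def reflect(phrase: str) -> str:
    # Flip first-person words so the echo sounds like a question back at you.
    return " ".join(REFLECT.get(word.lower(), word) for word in phrase.split())

def eliza_reply(text: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            # No understanding here, just pattern matching and substitution.
            return template.format(reflect(match.group(1).rstrip(".!?")))
    return FALLBACK

print(eliza_reply("I am upset about my job"))  # How long have you been upset about your job?
```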
IBM included a floppy disk of the program in many of its home PCs during the 1980s so users could have fun being psychoanalyzed by (and being rude to) ELIZA. This is probably where a lot of Gen-Xers and Millennials learned to troll people online.
In the 1980s, computer scientists turned to machine learning—training algorithms to find patterns in data and make decisions on their own. A key approach is the neural network, inspired by the human brain and built from layers of connected nodes that excel at spotting complex patterns. Neural networks often use supervised learning, where humans provide the correct answers during training so the model can learn to label new data accurately.
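To see what “humans provide the correct answers” looks like in practice, here’s a minimal sketch of supervised learning: a single artificial neuron, a perceptron, trained on labeled examples of the logical OR function. The data, learning rate, and epoch count are illustrative choices, not any particular historical system.

```python
# Minimal supervised learning: a perceptron learns OR from labeled examples.
# Each training example pairs an input with the correct answer (the label).
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

w = [0.0, 0.0]   # one weight per input, adjusted during training
b = 0.0          # bias term
lr = 0.1         # learning rate (illustrative choice)

def predict(x):
    # A single "node": weighted sum of inputs, then a threshold.
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

for epoch in range(20):
    for x, label in data:
        error = label - predict(x)   # supervision: compare to the human-provided answer
        w[0] += lr * error * x[0]    # nudge the weights toward the correct output
        w[1] += lr * error * x[1]
        b += lr * error

print([predict(x) for x, _ in data])  # expect [0, 1, 1, 1] after training
```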
Over the next 40 years, advances in machine learning, computing power, and memory led to deep learning—a more powerful approach that uses multilayered neural networks. Unlike earlier neural networks with only one or two hidden layers, deep neural networks can have dozens or even hundreds. These many layers let the networks learn from massive amounts of unlabeled, unstructured data and make predictions without human guidance. This makes deep learning ideal for tasks like natural language processing and computer vision, and it now powers most AI applications we use today.
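And to see what “multilayered” means structurally, here’s a sketch of a deep network’s forward pass using numpy: the same simple block (weights, bias, nonlinearity) stacked many times. The layer sizes are arbitrary, and the training step, backpropagation, is omitted entirely.

```python
import numpy as np

rng = np.random.default_rng(0)

# A deep network is the same building block stacked over and over:
# multiply by a weight matrix, add a bias, apply a nonlinearity.
layer_sizes = [8, 64, 64, 64, 64, 64, 64, 64, 64, 2]  # arbitrary: 8 hidden layers

weights = [rng.normal(0, 0.3, (m, n)) for m, n in zip(layer_sizes, layer_sizes[1:])]
biases = [np.zeros(n) for n in layer_sizes[1:]]

def forward(x):
    for i, (W, b) in enumerate(zip(weights, biases)):
        x = x @ W + b
        if i < len(weights) - 1:    # nonlinearity between layers...
            x = np.maximum(x, 0.0)  # ...here ReLU, which is what makes depth add power
    return x

x = rng.normal(size=(1, 8))         # one input with 8 features
print(forward(x).shape)             # (1, 2): two output values
```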
Which brings us to the 2020s and the new era of Generative AI, and the crossroads we face in its shadow: Will we be the victims of Skynet, or the beneficiaries of Star Trek’s utopian version of an egalitarian postcapitalist future?
© 2026 Brian G Parker