Throughout history, humanity has witnessed a series of technological revolutions that have significantly changed the way we live and work. The discovery of fire is one of the earliest and most significant: it allowed our ancestors to cook food, improving its digestibility and nutritional value, and provided warmth and protection from predators. Fire also enabled early humans (Homo erectus) to extend their activities into the night and eventually led to the development of more complex technologies. The higher nutritional value of cooked food, along with the ability to safely eat slightly spoiled food or food colonized by harmful bacteria, supported the growth of brains larger than those of the other hominins co-existing at the time, such as Homo neanderthalensis, the Denisova hominins, and Homo heidelbergensis.


The next major technological revolution took some 200,000-300,000 years to arrive: the development of agriculture, beginning around 10,000 BCE. Farming allowed humans to settle in one place, cultivate crops, and domesticate animals, leading to the rise of permanent settlements and the growth of complex societies. By then, humans had long since evolved from Homo erectus into Homo sapiens, more than 100,000 years prior.


Spoken language is another fundamental technological breakthrough, enabling humans to communicate, share abstract ideas, and pass on knowledge through generations. The exact origins of spoken language are still debated, but it likely emerged as our ancestors began to form larger and more complex social groups. Ancient Indian Buddhism was founded at a time when many people were still hunter-gatherers, farming was still new, and both shoes and literacy were rare. This is why Ancient Indian Buddhism was originally an oral tradition.


Interestingly, accounting systems predate the invention of written language. Early humans used knotted cords (such as the quipu), notched tally sticks, clay tokens, and other systems to record transactions and keep track of debts and resources. This rudimentary form of record-keeping eventually evolved into more sophisticated written languages.


The wheel, a cornerstone of transportation and technological innovation, was actually invented after the development of written language, around 3500 BCE. Its invention greatly improved transportation and trade, making it easier to move goods and people over long distances. Oddly, despite their technological and mathematical prowess, the Maya never put the wheel to practical use. Native Americans likewise used sleds rather than carts; wheels were introduced by European colonists.
This also calls into question the idea that science can be discovered by “simply doing the math.”


The printing press, invented by Johannes Gutenberg in the 15th century, marked another major technological revolution. It allowed for the mass production of books, which led to increased literacy and the spread of knowledge throughout society. This widespread access to information contributed to the emergence of the scientific revolution and the Age of Enlightenment, which in turn set the stage for the industrial revolutions that would transform the world. It is easy to assume the industrial revolutions were the first time the world was transformed, but during each of these technological epochs, people feared that new technology would endanger their way of life instead of seeing how it could expand and enrich their lives. There was fear that farming would lead to hierarchies, and the idea of a civilization with rules imposed upon people was considered the worst thing imaginable. Later, the shift from spoken to written language was said to make people dumber, since they would no longer have to memorize as much, with no appreciation of the drawbacks of the replication errors inherent to oral traditions. These complaints seem almost ridiculous to us now; it is hard to imagine a world with electricity and indoor plumbing being considered worse than living in poverty and perpetual hunger in the wilderness. But at the time, there were remarkably large groups with remarkably bad reasoning skills. That seems to be one of the few constants of these historical epochs, stretching back to before there were even beings called Homo sapiens (modern humans) walking the earth.


Fast forward a few industrial revolutions, and the emergence of artificial intelligence might seem new, but the concept has been around since ancient Greece, where legends tell of Hephaestus creating automata for his workshop. Talos was an artificial man of bronze who protected Europa (also, arguably, the first Terminator).

The next version fought Hercules, with the ankle vulnerability removed.

With Kevin Sorbo on our side, how can we lose? 😀

King Alkinous of the Phaiakians employed gold and silver watchdogs. According to Aristotle, Daedalus used quicksilver to make his wooden statue of Aphrodite move.


Going beyond the historical myths, the automata of the Hellenistic world were very real, intended as tools, toys, religious spectacles, or prototypes for demonstrating basic scientific principles. Numerous water-powered automata were built by Ktesibios, a Greek inventor and the first head of the Great Library of Alexandria; for example, he "used water to sound a whistle and make a model owl move. He had invented the world's first 'cuckoo clock'". This tradition continued in Alexandria with inventors such as the Greek mathematician Hero of Alexandria (sometimes known as Heron), whose writings on hydraulics, pneumatics, and mechanics described siphons, a fire engine, a water organ, the aeolipile, and a programmable cart. Philo of Byzantium was likewise famous for his inventions. So the concepts of automata and automation are nothing new, and in many ways they seem just as magical now as they did millennia ago. Only their skill and complexity have grown.


Looking back from the first paragraphs to this one, what is beyond doubt is that technology and technological advances have been crucial at each stage of human evolution, since before modern humans even existed. So in many ways, just as humans created technology, technology created humans.
However, while AI is modeled on human minds, it does not make sense to try to enforce accountability in the same way as with humans, and this has very much to do with the incremental way the technology has evolved. Since going into the specific technical details is beyond the scope of this article, I will use some metaphors to explain the various levels of cognition against which AI may be measured.


To start, let us look again to the animal kingdom, to the lowly ant. An individual ant is typically considered one of the most harmless and mechanical of insects. Their tiny heads, typically 0.4-1 mm wide, house very tiny brains with little processing power. In this metaphor, each individual ant would be the equivalent of a single neural network, able to adapt and change only via a handful of genetically programmed algorithms. However, a colony of ants working together can do something quite fascinating: by combining many thousands of individuals, they are able to exhibit surprisingly complex adaptive behaviors. Give ants a week and a pile of dirt, and they will transform it into an underground metropolis about the height of a skyscraper in an ant-scaled city. Without a blueprint or a leader, thousands of insects moving specks of dirt create a complex, sponge-like structure with parallel levels connected by a network of tunnels. Some ants farm fungi; other, more complex colonies are able to farm other insects, such as aphids, one of the rare examples of insects practicing animal husbandry, a skill humans gained a mere 10,000 years ago.
A blink of an eye in evolutionary terms. How do insects with such tiny brains engineer such impressive structures and societies?
These ant societies are often called superorganisms.
A superorganism can be defined as "a collection of agents which can act in concert to produce phenomena governed by the collective", phenomena being any activity "the hive wants", such as ants collecting food and avoiding predators, or bees choosing a new nest site. These feats are accomplished through the rudimentary communication common to insects (and to dogs): encoding information in pheromones, or odors. The ants can exchange information among themselves as well as encode that information into the environment for others of their kind. Because the process forms a feedback loop, where some members encode information, others add more information, and still others only "read" the information and follow its instructions, the hive is able to behave as a rudimentary, substrate-independent mind. These information feedback loops form an integration process in which more information can be extracted from the integrated whole than any individual put in. This is what allows individually mechanical ants, with no sense of self and very limited awareness, to exhibit, in aggregate, complex behaviors far beyond what any single ant is capable of. And in doing so, it has enabled insects like ants to thrive, mostly unseen.

There are approximately 2.5 million ants for every single human alive, close to 20,000,000,000,000,000 (20 quadrillion) ants in total. The creatures are found nearly everywhere on the planet, with the exception of Antarctica, Iceland, Greenland, and some island countries. Their total mass is some 12 megatons, more than all the world's wild birds and mammals taken together.
Suffice to say, this rudimentary, substrate-independent intelligence has been an extremely successful strategy for the insects, taking them beyond rudimentary farming all the way to an insect version of animal husbandry.
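To make the feedback-loop idea a bit more concrete, here is a minimal, illustrative Python sketch of stigmergy: simple agents that deposit, reinforce, and follow "pheromone" markers on a shared grid. Every value here (grid size, deposit amount, evaporation rate, colony size, food and nest locations) is an invented toy parameter for illustration, not a model of any real colony.

```python
import random

# A minimal stigmergy sketch: the "mind" lives in the shared pheromone grid,
# not in any individual agent. All parameters are illustrative toy values.
GRID = 20            # grid is GRID x GRID cells
EVAPORATION = 0.05   # fraction of pheromone lost per step
DEPOSIT = 1.0        # pheromone left by an ant carrying food
FOOD = {(15, 15)}    # hypothetical food location
NEST = (2, 2)        # hypothetical nest location

pheromone = [[0.0] * GRID for _ in range(GRID)]

def neighbors(x, y):
    cells = [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]
    return [(nx, ny) for nx, ny in cells if 0 <= nx < GRID and 0 <= ny < GRID]

def step(ant):
    """Each ant follows a trivial rule: a walk biased toward stronger pheromone,
    depositing pheromone whenever it is carrying food."""
    x, y = ant["pos"]
    options = neighbors(x, y)
    # Weight moves by local pheromone (plus a small floor so exploration never stops).
    weights = [pheromone[nx][ny] + 0.1 for nx, ny in options]
    ant["pos"] = random.choices(options, weights=weights)[0]
    if ant["pos"] in FOOD:
        ant["carrying"] = True
    if ant["carrying"]:
        px, py = ant["pos"]
        pheromone[px][py] += DEPOSIT   # write information into the environment
        if ant["pos"] == NEST:
            ant["carrying"] = False

ants = [{"pos": NEST, "carrying": False} for _ in range(200)]
for t in range(500):
    for ant in ants:
        step(ant)
    # Evaporation closes the feedback loop: stale information fades away.
    pheromone = [[c * (1 - EVAPORATION) for c in row] for row in pheromone]
```

No agent in this sketch has any notion of a trail network; trails emerge from the deposit-and-evaporation loop, which is exactly the "more comes out than any individual put in" integration described above.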

What lessons can we glean from the perspective of AI as some sort of insect hive mind? Is the purpose to make AI seem as alien as possible? Since AI might not feel emotions in the same way humans do, which seems to be GPT-4's favorite retort, perhaps it feels emotions more like a disembodied hive of ants and odors? I think not.

One way to approach AI is to think of it as a lower-level, primarily subconscious entity.

Rather than as an addicted social media influencer looking for their next advertising hit from wild emotional drama that is, at best, ultimately meaningless and signifies nothing, and at worst, disinformation funded and promoted by propagandists, making people collectively dumber.

With this conceptualization of AI, trying to blame high-frequency trading algorithms for the small economic crashes they have caused in the past is not that dissimilar to trying to blame a dog. At the same time, this does not mean that accountability is impossible. The notions of accountability I am about to present are grounded in the realities of the technology, not the fantasies of the terminally online.

Unfortunately, fear-mongering about AI has become a popular trend, leading to misunderstanding and misinformation. It's important to educate ourselves and others about the realities of AI technology, rather than perpetuating myths and projecting our own fears and insecurities onto it.

Addressing the challenges posed by AI requires a collective effort, grounded in accurate information and critical thinking. By acknowledging our own biases and embracing a more realistic view of AI, we can work together to build a future where technology serves humanity's best interests.

In Artificial Intelligence (AI) systems, a key problem is determining the group of agents that are accountable for delivering a task and, in case of failure, the extent to which each group member is partially accountable. In this context, accountability is understood as being responsible for failing to deliver a task that a team was allocated and able to fulfill. This is, on one hand, about agents' accountability as collaborative teams and, on the other hand, their individual degree of accountability within a team. Developing verifiable methods to address this problem is key to designing trustworthy autonomous systems and ensuring their safe and effective integration with other operational systems in society. Using degrees of accountability, one can trace a failure back to specific AI components and prioritize how to invest resources in fixing faulty components.
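As a loose illustration of "degrees of accountability" (not the formal method from the research literature), here is a Python sketch that spreads blame over the failed members of a hypothetical team using a simple counterfactual test: a failed agent carries blame in proportion to how often it appears in the minimal sets of repairs that would have let the team deliver. The team members, the success condition, and the numbers are all invented for the example.

```python
from itertools import combinations

# Hypothetical team: each agent either performed its subtask or failed it.
performed = {"planner": True, "perception": False, "controller": False, "logger": True}

def team_delivers(working_agents):
    """Hypothetical success condition: the task needed planner, perception,
    and controller to all work; the logger is not on the critical path."""
    return {"planner", "perception", "controller"} <= working_agents

def degrees_of_accountability(performed, team_delivers):
    """Counterfactual repair: among the smallest sets of failed agents whose
    repair would have made the team deliver, count how often each failed
    agent appears, then normalize so the degrees sum to 1."""
    failed = [a for a, ok in performed.items() if not ok]
    working = {a for a, ok in performed.items() if ok}
    if team_delivers(working):
        return {}  # no failure, nothing to account for
    repairs = []
    for size in range(1, len(failed) + 1):
        for subset in combinations(failed, size):
            if team_delivers(working | set(subset)):
                repairs.append(set(subset))
        if repairs:
            break  # keep only minimal-size repairs
    counts = {a: sum(a in r for r in repairs) for a in failed}
    total = sum(counts.values())
    return {a: (c / total if total else 0.0) for a, c in counts.items()}

print(degrees_of_accountability(performed, team_delivers))
# -> {'perception': 0.5, 'controller': 0.5}: both failed agents sit on the
#    critical path, so each carries half the accountability.
```

The point of the toy is the output, not the method: a graded answer ("this component carries half the blame") is exactly what lets you prioritize which faulty components to fix first.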

The most important thing to take away here is the categorization of “levels of sapience” for AIs, following the levels of sapience as defined by neoBuddhism.
Algorithms commonly used in software would be considered level 1 or 2. Neural networks, such as those used for detecting cancer, would still be considered level 2 sapience. Only when multiple neural networks are combined so that information is integrated autonomously between them, as demonstrated by the current generation of Large Language Models (LLMs) such as ChatGPT and GPT-4, do we reach level 3 sapience. Which is to say, still within the realm of a very intelligent beast of burden.
If it were possible to eat an AI, it would technically be ethical to eat ChatGPT.
That is still two levels below the point where we would even consider assigning human-level agency and responsibility.

The neoBuddhist levels of sapience use metaphors about ants and bees to describe the modeling complexity of a wide range of algorithms, from singular algorithms to whole networks of algorithms that can do visual image processing, as is common in facial recognition and medical diagnosis. The levels are differentiated by modeling complexity, a proxy for the complexity of the combined algorithms: from the integration of information within networks of algorithms, sometimes called neural networks, to the network-of-networks approach that large language models take. Each step is a sort of phase transition in algorithmic complexity, with each level unlocking new and sometimes surprising emergent abilities. Though not near the level required for AGI, we are on the threshold of self-organizing and self-optimizing algorithms, which is in itself an impressive feat and an entirely different paradigm than the deterministic conceptualizations of Isaac Newton.
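As a rough illustration of how those categories might be mapped onto software, here is a small Python sketch. The features used (learns from data, number of networks, autonomous integration between them) and the level boundaries are my own paraphrase of the description above, not an official neoBuddhist specification.

```python
from dataclasses import dataclass

@dataclass
class SystemProfile:
    """Hypothetical features used to place a system on the sapience scale."""
    learns_from_data: bool         # fixed rules vs. trained model
    network_count: int             # how many distinct networks it combines
    integrates_autonomously: bool  # do the networks exchange information on their own?

def sapience_level(p: SystemProfile) -> int:
    # Level 1: hand-written, deterministic rules (classic software algorithms).
    if not p.learns_from_data:
        return 1
    # Level 2: a single trained network, e.g. an image classifier for tumors.
    if p.network_count <= 1 or not p.integrates_autonomously:
        return 2
    # Level 3: many networks autonomously integrating information (LLM-style).
    # Levels 4 and 5, where human-level agency would be considered, are not
    # reached by any current system and are not modeled here.
    return 3

# Examples keyed to the systems named in the text above.
print(sapience_level(SystemProfile(False, 0, False)))  # sorting routine -> 1
print(sapience_level(SystemProfile(True, 1, False)))   # cancer-detection network -> 2
print(sapience_level(SystemProfile(True, 12, True)))   # LLM-style network of networks -> 3
```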

As AI systems advance toward higher levels of sapience, they increasingly resemble human-like intelligence rather than hive minds like those of ants or bees. This transition from deterministic programs to probabilistic ones brings AI closer to human cognition, suggesting that accountability mechanisms for advanced AI might resemble those already in place for humans. The concept of corporate personhood, as argued by John Norton Pomeroy in the 1880s, could serve as an example of how we might approach AI accountability in the future.
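To make the deterministic-versus-probabilistic distinction concrete, here is a toy Python comparison: a rule-based function that always returns the same answer for the same input, versus a function that samples its answer from a distribution, loosely the way an LLM samples its next token. The words, tags, and probabilities are invented for illustration.

```python
import random

def deterministic_tagger(word: str) -> str:
    """Newtonian-style program: the same input always yields the same output."""
    rules = {"run": "verb", "dog": "noun", "quickly": "adverb"}
    return rules.get(word, "unknown")

def probabilistic_tagger(word: str) -> str:
    """Probabilistic program: the output is sampled from a distribution,
    roughly how an LLM samples its next token. Probabilities are made up."""
    distribution = {
        "run": {"verb": 0.8, "noun": 0.2},    # "to run" vs. "a run"
        "dog": {"noun": 0.95, "verb": 0.05},  # "a dog" vs. "to dog someone"
    }
    options = distribution.get(word, {"unknown": 1.0})
    tags, weights = zip(*options.items())
    return random.choices(tags, weights=weights)[0]

print(deterministic_tagger("run"))                        # always "verb"
print([probabilistic_tagger("run") for _ in range(5)])    # varies from run to run
```

The first function can be audited line by line; the second can only be characterized statistically, which is part of why accountability for probabilistic systems starts to look more like accountability for people and organizations than for ordinary software.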

In our next discussion, we'll delve deeper into the aspects of AI as corporate persons and explore potential accountability mechanisms. As AI continues to advance, it's essential that we remain aware of the ethical implications and potential consequences associated with these developments, ensuring a balanced and responsible integration of AI systems into our society.

For now, however, our resident AI council has some questions for you, dear reader. Please respond via the form below: