Orange River Media – Did Star Trek Get A.I. All Wrong?

Hey guys, Tyler here. So, Star Trek has always presented a future where humanity has overcome its differences to forge a new path beyond our petty squabbles here on Earth. We interact with members of countless other species, both allies and enemies, in the quest to satisfy our thirst for knowledge. But there’s a major component of Earth’s future society that aids us in our exploration of the galaxy, and it has been given more and more attention with each successive chronological instalment of Star Trek. I’m talking, of course, about artificial intelligence.

A.I. has long been a fascination of science fiction writers and real-world researchers alike. Once a far-off, hypothetical concept as fantastical as warp drive itself, artificial intelligence today is real and present in many facets of our lives. Trek approaches the subject lightly in a few episodes of The Original Series, presenting A.I.s that are simple and easy to control. This is evident not only with Landru, ruler of Beta III, or Mudd’s androids, but also with the Enterprise computer itself. In The Next Generation era, however, the character of Data adds a level of complexity to narratives about A.I., often evoking themes of civil rights and the philosophy of consciousness.

Data’s positronic brain is incredibly sophisticated, a remarkable feat of engineering whose ability to simulate personhood has stumped even the finest engineers of his own era. In his journey to become more human on a social level, Data experiments with numerous methods, such as augmenting himself with an emotion chip. Later, android-organic hybrids such as Dahj and Soji Asha take this augmentation to the next level, blurring the line between organic and artificial life in a way paralleled only by the Borg. But in the face of all this, one question remains: setting aside Star Trek’s use of seemingly “impossible” technologies such as transporters and FTL drives, are its “predictions” about progress in creating human-like artificial intelligence too optimistic, or not optimistic enough?

Did Star Trek Miss the Mark on Artificial Intelligence?

The Case for Slower A.I. Development

(Paramount+) The development of human-like A.I. can be a double-edged sword

Much of modern science fiction that revolves around A.I. and robotics is heavily influenced by the work of Isaac Asimov, whose seminal I, Robot was published in 1950. A collection of short stories originally published over the preceding decade, I, Robot not only had a substantial impact on the field of computing ethics but also introduced the idea of a “positronic brain” that functions as a CPU. The term “positronic” was based on the then-recently discovered positron, or anti-electron, a subatomic antimatter particle that produces photons when it annihilates with electrons at low energies. The positronic brain enables the transmission of thoughts and impulses in a robot and, through an unspecified process, helps relay the cognitive information involved in decision-making.

Needless to say, the positronic brain is a fictional device, hypothetical on a par with transporters and FTL drives. We don’t know exactly how one would be constructed, and the theoretical physics involved in its design is still in its relative infancy. But that hasn’t stopped A.I. engineers and researchers from embarking on a quest, over the last half-century or so, to develop A.I.s that approach human intelligence, not just in terms of computing power but in terms of emotional behaviour. While we’ve certainly built some impressive machines that can beat humans at chess or perform tasks beyond our strength, we still haven’t cracked the code when it comes to simulating consciousness, particularly the general human ability to learn and solve unfamiliar problems.

While some might observe that we are closer than ever to “perfecting” artificial intelligence, whatever that means, it’s also important to consider that the very definition of A.I. is constantly shifting. This is partly due to the lack of consensus among competing subfields of A.I. research: the divide between machine learning approaches such as neural networks and logic-based symbolic systems, along with deeper philosophical differences, has left A.I. in a state of perpetual goalpost-shifting. This is known as the “A.I. effect,” a phenomenon in which researchers discount real gains in the field by saying they’re not “true” A.I.

Examples include the dismissal of optical character and speech recognition, mastery of strategic gameplay, self-driving cars, intelligent routing in delivery networks, and military simulations as “been there, done that”: routine technology rather than genuine intelligence. These feats, which would have understandably been considered science fiction in, say, the mid-’50s when the A.I. discipline emerged, are no longer considered “impressive,” and the cutting edge of the field now looks far beyond such “simple” accomplishments. There’s even an adage known as Tesler’s Theorem that can be summed up as, “A.I. is whatever hasn’t been done yet.”

Terminator
(Skydance) It is possible that not all A.I.s will be like Data…

Let’s switch back to Star Trek for a second. A similar effect can be observed when comparing A.I.s in The Original Series era with those of The Next Generation era. As I mentioned earlier, the Enterprise computer acts as a sort of A.I., able to recall information, perform diagnostics, and make logical assessments, but it largely follows human commands. In this way, one could argue that it’s not “real” A.I., and most of the “real” A.I.s encountered by Kirk’s Enterprise, particularly androids, were built by aliens. Between this and other technologies such as the DOT-7 repair robots seen in Discovery, it’s clear that Starfleet A.I.s in the 23rd century are rudimentary compared with the likes of Data in the 24th.

What makes Data special is his positronic brain, a revolutionary step forward compared to the duotronic and multitronic systems of the previous century. Activated in 2338 at the Omicron Theta colony, Data was the fifth android built by cyberneticist Dr. Noonien Soong. Soong’s ancestor Arik had been fascinated with the prospects of genetic engineering and raised a group of Augment children developed from embryos left over from the Eugenics Wars. Noonien Soong’s work was conducted largely outside the purview of Federation authorities, who might not have greenlit such a project with state funding.

As stated in The Next Generation episode “The Measure of a Man,” one of the biggest obstacles in creating a stable positronic brain was resolving the electron resistance across its filaments. Technobabble aside, this indicates that positronic engineering involves a great many physical considerations, especially given that we’re dealing with antimatter. Further, it’s likely that Soong was not the first to attempt such a feat; he simply succeeded where others had fallen short. And given the Federation’s scepticism of conscious, human-like A.I., rooted in a humanist philosophy that values organic life above machines, it’s not surprising that it took until the 24th century for someone like Soong to pull it off.

…Or is it?

The Case for Faster A.I. Development

Rutherford Implant
(Paramount+) Just like Rutherford, humans may require implants to keep up

See, the thing is, technology is advancing at an ever-accelerating rate even today. While Moore’s Law (the observation that the number of transistors on an integrated circuit doubles roughly every two years) has been slowing down, techniques involving 3D circuits and quantum computing will likely continue to push hardware beyond its present limits. Regardless of hardware, though, the software behind machine learning and complex artificial neural networks grows more capable by the day, and some argue this brings us ever closer to the so-called “Singularity,” the point at which the pace of technological growth exceeds our ability to understand or control it.
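To put that doubling claim in perspective, here’s a quick back-of-the-envelope sketch in Python. The 1971 baseline (the Intel 4004’s roughly 2,300 transistors) is just an illustrative starting point I’ve chosen, and real chips never tracked the curve this cleanly:

```python
# Back-of-the-envelope Moore's Law projection. Illustrative numbers only:
# the Intel 4004's ~2,300 transistors in 1971, with a clean doubling
# every two years.

def projected_transistors(year, base_year=1971, base_count=2300):
    """Projected transistor count per chip, doubling every two years."""
    return base_count * 2 ** ((year - base_year) / 2)

for year in (1971, 1991, 2011, 2021):
    print(f"{year}: ~{projected_transistors(year):,.0f} transistors")

# 1971: ~2,300 / 1991: ~2.4 million / 2021: ~77 billion -- the last figure
# is the same order of magnitude as the largest chips actually shipping
# around then, which is why the "law" held up for so long.
```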

Artificial general intelligence, or AGI, is the ability of a machine to learn and perform any task a human can, and it’s supposedly just around the corner. Sometime this decade, proponents expect, we should be able to construct a machine that matches or even exceeds human intelligence, and within thirty years, one that can process information faster than all humans combined. That, the argument goes, will create a runaway effect that eventually forces humans to receive neural upgrades just to comprehend an ever-changing technological world. In that sense, Star Trek did miss the mark: it presumed A.I. development would be too slow, even taking World War III and the post-atomic horror into account. Surely other Federation species would have developed sufficient AGI by the 22nd century, right? Shouldn’t this world be more like Detroit: Become Human? Okay…not so fast.

The Truth

(Paramount+) A.I. can take many shapes and forms

The truth is, when it comes to predictions about AGI arriving this decade or the next and triggering a runaway technological Singularity a la Terminator, there are good reasons to be sceptical. Many predictions about the achievement of AGI have already come and gone; some researchers thought it would be possible by the 1980s, but that decade passed without it. In fact, the concept of exponential advances in A.I. as a function of Moore’s Law is flawed in its own way. Popularized by famed futurist Ray Kurzweil, this prediction rests on a logarithmic plot he developed to showcase advances in computing power over the centuries, but one critique is that his milestones are cherry-picked, with the logarithmic scale flattering what have often been linear advancements. Additionally, his prediction of hyperintelligent A.I. emerging in the 21st century doesn’t take into account the various obstacles A.I. research faces.

There are numerous theories as to why A.I. isn’t advancing as quickly as many had hoped…or feared. But most of these explanations boil down to the reality that human cognition, driven in large part by emotion, is not very well understood. Human neuropsychology is an ever-evolving field, and closing our gaps in understanding the brain will be critical, many say, to solving a lot of the problems facing A.I. research. It’s hard to say how long that would take, but given the complexity of the mind, I wouldn’t be surprised if “a few hundred years” is really what it takes for advances in that field to supply the puzzle pieces missing from AGI.

Four polls of A.I. experts conducted in 2012 and 2013 yielded some interesting results: when asked by what year they would be 50% confident AGI would arrive, the median answer was 2045, while the mean was 2081. And when asked for 90% confidence instead, 16.5% of these experts answered “never.” Roboticist Alan Winfield once wrote in The Guardian that the gulf between modern computing and AGI is as wide as the gulf between current space travel and faster-than-light propulsion. This is an interesting framing, and applied to Star Trek, it would seem to suggest that humanlike androids should have become feasible around the same time as warp drive, which Zefram Cochrane demonstrated with his historic flight in 2063.
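That gap between the median and the mean is worth pausing on: a handful of very late estimates can drag the mean decades past the median without moving it at all. A tiny illustration with made-up poll answers (not the actual survey data):

```python
import statistics

# Hypothetical poll answers (made-up numbers, not the real survey data).
# A few very late estimates pull the mean far past the median.
answers = [2035, 2040, 2045, 2050, 2060, 2100, 2200, 2300]

print(statistics.median(answers))  # 2055.0  -> the "typical" expert
print(statistics.mean(answers))    # 2103.75 -> skewed by the long tail
```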

But there are other considerations. Regarding “The Singularity,” futurist Martin Ford argues in his book The Lights in the Tunnel that massive unemployment due to automation, a real concern among many analysts, would reduce consumer demand and destroy the incentive to invest in the technologies that would bring the Singularity about. He calls this a “technology paradox,” pointing out that the level of technology required for widespread automation, even of white-collar jobs, falls far short of AGI; most routine tasks, and even some non-routine tasks, can be accomplished with very narrowly defined programming. Microsoft co-founder Paul Allen, meanwhile, has argued that understanding intelligence yields diminishing returns rather than accelerating ones, a “complexity brake”: the more progress science makes towards understanding intelligence, the harder further progress becomes. Some point to a slowdown in new patents per capita since the mid-to-late 19th century as evidence of the same pattern.

Home Assistants
Are today’s home assistants the first step to A.I.?

A similar argument is used for why transhumanism may never become mainstream: aside from cultural resistance to merging biology with technology, many argue that on a practical level, the investment necessary to provide affordable upgrades for everyone will be hindered by both economic and computational factors. And that cultural resistance isn’t necessarily unfounded: if you have a perfectly functioning arm or leg, there’s no reason to replace it with a prosthesis, and as for augmenting eyesight or intelligence or something like that, well, that’s a recipe for a stratified society. More likely, these “upgrades” will be limited to niche subcultures. Star Trek understands this in a way that I think is underappreciated in modern sci-fi: what makes humans human is what will empower us to do great things, and technology is simply a tool to aid us in our own endeavours. There are some cyborgs in the 23rd and 24th centuries, but these individuals’ implants are often the result of life-saving medical procedures. Besides, the aftermath of the Eugenics Wars and World War III has left its mark on Earth society even far into the future.

As far as androids and A.I. go, then, it makes sense that the field of artificial intelligence would experience various setbacks as expectations change. The myriad scientific disciplines involved in understanding A.I. (computer science, psychology, linguistics, philosophy, et cetera) each have unique problems of their own that will need to be overcome for AGI to be viable. And that’s not to mention the taboo that could arise from trying to give machines emotions; perhaps in the 21st century of the Star Trek timeline, Earth did have powerful A.I.s that could perform specialized tasks of incredible difficulty, and these machines caused social upheaval.

In any event, take the roughly sixty years between the present and 2081, the mean date by which AGI is estimated to arrive, multiply that timespan by a factor of four, and add it to the mean date: we arrive at the 2320s, not too far off from the year that Dr. Soong first activated Data. Where do I get the number four? It is admittedly arbitrary, but the point is to demonstrate that in Trek’s world, a number of obstacles (technological, sociopolitical, cultural, economic, ethical, and otherwise) would have to be overcome for A.I. to truly approximate human consciousness. And of course, Data is not the end-all, be-all; his “children,” so to speak, may represent the next generation of human-A.I. interfacing in the Trek universe. This would put humanity at a crossroads where we must decide how to avoid falling into the trap the Borg did: using cybernetics to oppress individuals and other cultures rather than to make them freer.
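For what it’s worth, the arithmetic is easy to sanity-check. Here’s a minimal sketch, assuming “the present” means roughly 2021, around when this video was made:

```python
# Toy extrapolation from the argument above, assuming "the present" is ~2021.
PRESENT = 2021
MEAN_AGI_ESTIMATE = 2081   # mean expert estimate cited earlier
FACTOR = 4                 # admittedly arbitrary, as noted above

span = MEAN_AGI_ESTIMATE - PRESENT             # ~60 years to the mean estimate
trek_agi = MEAN_AGI_ESTIMATE + FACTOR * span   # 2081 + 240 = 2321

print(trek_agi)  # 2321 -- within a couple of decades of Data's 2338 activation
```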

So, did Trek miss the mark on artificial intelligence? I don’t think so. I think the timeframes it established are perfectly reasonable given the societal conditions present in the Federation, even alongside a level of scientific prowess that is probably beyond what we will achieve in real life. Saying that world wars, social taboos, engineering obstacles, and numerous other factors keep humans from creating AGI until the 24th century is just as legitimate, I think, as saying it takes us until the 4th millennium; either way, the date is so far off that the number of breakthroughs required in neuroscience and computer engineering is beyond anything we can meaningfully project today, even with our world being as technologically advanced as it is.

In the meantime, thanks for watching! I’m interested to hear your thoughts in the comments. If you haven’t yet subscribed, be sure to do that as well so you won’t miss future uploads, and click the bell icon to receive all notifications. If you want to support my work even further, becoming a YouTube member or a patron at patreon.com/orangeriver is a great way to do so.

I’ll see you in the next video…live long and prosper.
