Artificial Intelligence
Artificial Intelligence (AI) has received a substantial amount of media coverage over the last few weeks. In March, OpenAI released GPT-4, which appears to be a big, unexpected leap in AI capabilities. A few weeks later, Google released a ChatGPT competitor known as Bard. Last week, a small group of AI executives testified before Congress on the potential perils of AI and implored lawmakers to draft legislation regulating their industry. A few days later, the Center for AI Safety released a short statement warning that AI could cause extinction-level events, likewise calling on Congress to regulate AI.
But asking technologically illiterate politicians to create regulations for AI is not destined to work out well. In the past, government stepping in to regulate new technologies has resulted in less competition, less innovation, and greater concentration of economic power in the hands of established companies. Giant corporations like Microsoft and Google have the resources to comply with government regulations; small startups do not. Many AI startups would be forced to sell to one of the large players at a fraction of the price they could command in an Initial Public Offering (IPO) on the stock market. And any regulations from U.S. policymakers would do nothing to slow China's quest to use AI as a military weapon against us.
As America and the world push ahead with AI, it is important that we have a clear understanding of this technology, including the different types of AI, their potential benefits and risks, and the progress being made in each area.
There are fundamentally two different types of AI. Weak AI is used to achieve specific goals, like evaluating loan applications, recognizing faces, and even driving a car. Weak AI is essentially an evolution of the data mining and predictive analytics efforts of the early 2000s. In Weak AI, various types of statistical algorithms, including deep learning neural networks, evaluate voluminous amounts of data to achieve a specific goal.
Strong AI, also known as Artificial General Intelligence (AGI), focuses on creating computers that can reason generally about the world around them like a human being. As MIT Technology Review recently reported, progress in AGI isn't as impressive as one might think. Computers aren't even close to being able to reason like a human. The raw computer hardware doesn't yet exist to do this, and researchers don't know what creating the software for an AGI would entail. Very little investment is going into AGI compared to Weak AI, so progress is likely to be slow for the foreseeable future.
Even the seemingly miraculous feats of GPT-4 still lie within the bounds of Weak AI. ChatGPT was fed a voluminous amount of information, essentially the text gathered by the Bing search engine's Internet crawlers through Dec 2021. It uses a deep learning neural network to pick from a list of probable next tokens (roughly, words or word fragments), generating its output one token at a time. It doesn't derive any meaning from that output, and it is notorious for generating text that is inaccurate. In other words, it doesn't know what it is talking about, and it often lies.
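That next-token process can be sketched in a few lines of Python. This is a toy illustration of the general sampling technique only; the vocabulary, probabilities, and `toy_model` function are invented for the example and have nothing to do with OpenAI's actual code, where a deep neural network scores tens of thousands of candidate tokens.

```python
import random

def next_token(model, context):
    """Sample one token according to the probabilities the model assigns."""
    probs = model(context)                        # maps token -> probability
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights)[0]

def toy_model(context):
    # Pretend the network, given "The cat sat on the", assigns these
    # made-up probabilities to the next word.
    return {"mat": 0.60, "chair": 0.25, "moon": 0.10, "banana": 0.05}

context = ["The", "cat", "sat", "on", "the"]
print(next_token(toy_model, context))
```

Note that the model samples by probability alone; nothing in this loop checks whether the resulting sentence is true, which is one intuition for why such systems can confidently produce inaccurate output.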
Even though nothing like Skynet from the Terminator movies is on the horizon, Weak AI still has the potential for misuse by criminals and criminal governments. In fact, it has already been used to do harm. Let’s examine some areas where AI is being applied and look at the potential benefits and perils of each.
Your Own AI Work Assistant
Within a few years, almost everyone who wants one will have their own, work-specific personal assistant, based on generative AI. It is already starting. For example, Microsoft just released an AI-based “copilot” that can write unit tests for software developers. Microsoft and other companies have many more such AI-based assistants under development. These assistants will automate many tedious tasks, remove much of the drudgery from work, and help people become more productive.
The perils stem from the fact that generative AI systems can be notorious liars. Unfortunately, most people will instinctively just believe what the AI tells them. This will allow misinformation to quickly spread, resulting in poor decisions, financial loss, and in some scenarios the loss of life.
Deep Fakes Get Real
One of the contestants in the last season of America’s Got Talent was an AI company that could produce “deep fake” videos in real-time. Their audition featured a man singing live on stage in front of one of their cameras. But the screen above him showed one of the judges, Simon Cowell, doing the singing.
Within 10 years, deep fakes will be indistinguishable from reality. Hollywood, for example, will be able to use this technology to resurrect deceased actors to star in photorealistic movies. Epic Games, a local Cary-based company, is working on this technology; its Unreal Engine can already produce near-photorealistic landscapes and people.
Deep fake technology also poses some perils. Recently, criminals tried to extort money from a family by claiming that they had kidnapped their daughter and were holding her for ransom. They used AI to imitate the girl’s voice begging for help.
It will soon be possible for blackmailers to generate realistic videos of you having an extramarital affair. If the Clinton campaign had this technology in 2016, they could have made the fake Russian dossier come to life, complete with Russian hookers urinating on Donald Trump. This could have changed the outcome of the election.
And deep fakes can even be more dangerous. Can you imagine what might happen if someone leaks a deep fake video of the President of the United States and the Joint Chiefs of Staff plotting a nuclear first strike on an adversarial country? Would the adversary wait for an explanation or launch a first strike themselves?
The Rise of The Cyborgs
Human-machine integration is nothing new; anyone whose life depends on a pacemaker can attest to that. But AI-based human-machine integration will take these capabilities to an entirely new level. For example, Neuralink, one of Elon Musk's companies, is working on a brain implant that it hopes will help paraplegics walk again. I have a friend who is a paraplegic. This technology could be great news for him and others. And many additional transformative scenarios could become possible within a few short years.
The perils of this technology are literally mind-boggling. Will the government or giant corporations be able to read your thoughts? Will they be able to plant thoughts in your brain? Will they be able to control you? It’s difficult to say for certain, but the potential for abuse of AI-based human-machine integration is large.
A Revolution in Medicine
AI is entering medicine in a big way. A study done a few years ago showed that using AI to analyze a person's Google searches could predict early-stage prostate cancer, which is difficult to diagnose, better than a trip to the doctor.
Over the next 20 years, AI will usher in a revolution in disease prevention and treatment. Researchers are already using AI to develop individualized treatment regimens for certain diseases based on a person's family history, lifestyle, and existing medical conditions. AI will also usher in an era of personalized designer drugs, and it will accelerate the use of CRISPR to delete or rewrite genetic mutations to eradicate conditions like Down syndrome, cystic fibrosis, and sickle cell anemia. AI will help researchers design 100% genetically compatible, 3D-printable replacement organs. And AI-based autonomous surgical robots are on the horizon.
The perils of AI in medicine are also great. Recently, there has been a lot of controversy surrounding gain-of-function research that seeks to make naturally occurring viruses more deadly. In the future, AI may also be used to create narrowly targeted diseases. Imagine if warring ethnic groups used AI to create deadly viruses that target the genome of their rivals. Could a government use AI to create a disease to target a specific individual, like a rival politician based on her DNA? At this point, we don’t know, but the potential for harm is frightening.
The Transformation of Transportation
In the next decade or so, Level 5 fully autonomous vehicles should be on the road. Autonomous vehicles promise greater road utilization and fewer accidents. But achieving the full benefits of autonomous vehicles will require removing human drivers from behind the wheel. So, at some point, the United States and other countries are likely to make human driving illegal. And when the joy of driving a car is removed, the emotional attachment of owning one will likely disappear as well. Car ownership could plummet, with autonomous ride-sharing services picking up the slack, at least in urban and suburban areas. The impact will be much broader than just driving. If you don't own a car, why do you need a garage? And why would businesses need large parking structures? The entire urban and suburban landscape could change dramatically as a result.
The perils of a fully autonomous transportation system are obvious. Governments will be able to prevent individuals or even entire populations from traveling. During the COVID fiasco, many governments tried to get their citizens to stay at home. Many people like me didn’t obey. Autonomous vehicles will enable governments to ensure that people don’t wander far from home during a future lockdown. They will also allow governments to restrict the movements of specific individuals, based on any criteria, including a China-style social credit system.
Killer Robots on The Horizon
The United States, China, and other countries are using AI to enhance their military capabilities. A big area of research is using AI to crack the security of a rival's military computers. It is well known that China already uses AI to breach the security of American corporations for industrial espionage. And just as automakers are using AI to create self-driving cars, governments are using AI to create autonomous military vehicles. For example, the U.S. military has tested an AI-based robotic pack animal produced by Boston Dynamics, and autonomous drones are another area the military is actively exploring. The biggest danger of military AI is the potential for fully autonomous weapons that spiral out of control. And yes, autonomous killer robots are well within the capabilities of Weak AI; no AGI is needed. The U.S. recently launched an international effort on the responsible military use of AI, but Russia was not invited to the first meeting, and China did not attend.
What The Future Holds
I have worked in technology for over 30 years, so I always like to point out that a technology itself is neither good nor bad; it depends on how the technology is used. AI is no different, and it has already been used for both good and ill. Some AI researchers are so alarmed by the potential perils of AI that they've called for a temporary moratorium on its development. Unfortunately, countries like China, India, and Russia are unlikely to pause their AI research, so AI development will continue at full speed. Society will reap the benefits, and we will also have to learn how to deal with the consequences as they become known.
The Blankenburg Report
Eric Blankenburg
Eric is a husband, father of four, technology guy, U.S. Air Force veteran, and left coast refugee. He is a lifelong conservative and “disgruntled” Republican, who has sought ways to help the GOP live up to its values. When Eric is not working or spending time with his family, he likes to write about a variety of current issues. Eric is a regular writer for Liberty First Grassroots (LFG).