
The mostly American-led AI industry was shaken to its core when DeepSeek, a Chinese open-source AI model, entered the market to compete with the best AI models from U.S. companies, wiping nearly $600 billion off the market cap of AI chipmaking giant Nvidia, the biggest single-day loss for any company in U.S. history. Nvidia’s stock nosedived 17% to close at $119.58. Since that disastrous day, however, Nvidia’s stock has made a strong rebound, rising 8% to close at $128.86 a share as of press time.
It was bound to happen that China would enter the field and eventually compete with the Americans in AI. DeepSeek created an AI model that is purportedly significantly cheaper, yet can deliver results similar to those of American AI models built on far more expensive super-processing chips.
Why does this news matter beyond the tech giants’ market warfare? The takeaway is that it signals AI is here to stay: the AI industry, which has expanded globally with multiple big players that have made AI cheaper, will not go away despite massive concerns over ethics and job displacement. When technology gets cheap, it matures for long-term staying power and mass consumption.
Increased debates on AI are needed
Now that there appears to be no turning back from this technological reality, there must be even more rigorous debate on how to mitigate the risks of AI, so that its positive outcomes can be maximized while its obvious dangers are minimized.
And these debates on safeguarding society from the potential dangers of AI must not come only from the tech giants creating it, but from every sector of society that AI will penetrate. Politicians, labor leaders, owners of small and midsize businesses, academics, workers of all types from professionals to blue-collar, religious leaders: everyone should be debating AI, because AI will impact everyone.
Pope Francis recently released a document advocating that human responsibility grow in proportion to technology, noting that the impact of AI’s uses in various sectors “may not always be predictable from their inception.” Additionally, “AI should be used only as a tool to complement human intelligence, rather than replace its richness,” the document said in its conclusion.
And let us remember what the late physicist and futurist Stephen Hawking said, “The development of full artificial intelligence could spell the end of the human race. It would take off on its own and redesign itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”
In conversations with Gen Z young adults, one of their top concerns is picking a major whose career path will not be replaced by AI in 20 to 30 years. Healthcare has always been the safe bet; that has been the conventional thinking in colleges across the U.S. for the last 30 years. But will it remain so, with AI-enhanced medical care, such as AI-assisted diagnosis of illnesses and AI readings of MRIs, already arriving? AI is emerging in every industry.
How to mitigate risks of AI
First and foremost, there obviously needs to be government regulation of AI to guide it and manage its potential dangers. As with other massive systems that touch every person in society, such as our monetary system, regulatory bodies must set appropriate legal parameters for AI.
Second, just as computer and digital literacy became institutionalized in education, comprehensive AI education must likewise be made available. AI education will not only help us be better equipped to survive and thrive in an AI world but also keep us safer from AI’s pitfalls: deepfakes, misinformation, identity theft, ad infinitum.
Risks of AI
Massive job replacement. The tech giants financially invested in pushing the boundaries of AI praise the opportunities ahead. But some in the industry also warn of AI’s sweeping influence. Mustafa Suleyman, a key figure in AI who cofounded DeepMind and now heads Microsoft AI, argues in his book The Coming Wave: Technology, Power, and the Twenty-first Century’s Greatest Dilemma that the idea AI will only “assist” workers for the long haul is a myth. Sure, it might make people more efficient initially, but AI is ultimately designed to replace labor. Jobs in administration, customer service, and even creative fields like content creation are already seeing this shift. He makes clear that these aren’t hypothetical changes; they’re happening now.
He cites a 2023 McKinsey report estimating that about 50% of all work activities could be automated by 2030. Up to 400 million workers may lose their jobs to technological advances by then.
It’s not just job replacement that poses a threat. Remember the 1983 techno-thriller WarGames, in which a computer simulation nearly triggered a nuclear war. Today, a class of AI-led weapons known as Lethal Autonomous Weapons Systems (LAWS) operates independently of human control and human judgment. LAWS are not nuclear, for now, but the technology already exists for computer algorithms to independently identify a target and employ an onboard weapon system to engage and destroy it without manual human control. Autonomous weapons can respond to threats so quickly that they might trigger a chain reaction of escalating conflict that no one wants.
There are also AI issues with social surveillance, cyber theft, social fracturing through social media algorithms, and on and on, which raises the question: has a Pandora’s box been opened? Only time will tell. To be on the safe side, it is prudent that we begin serious debates now about what we do and do not want from our AI future, rather than let the tech profiteers dictate a destiny for us.