Recently, the CEO of OpenAI met with Republican members of the House of Representatives on Capitol Hill. OpenAI, the Microsoft-backed developer of ChatGPT, is facing growing pressure over AI regulation. The US Congress has been debating AI-related rules, and the Democratic Biden administration has been urging the development of artificial intelligence regulations, but because of the partisan split in Congress, progress on passing such legislation has been slow.
Senior officials from US law enforcement and intelligence agencies said on the 9th that advances in artificial intelligence may lower the technical barriers to hacking, fraud, and money laundering, making such crimes easier to carry out. Meanwhile, a research report estimates that generative artificial intelligence could add as much as $1 trillion to the US economy over the next 10 years, though possibly at a cost to workers.
With the rapid development of artificial intelligence, it has become difficult to tell whether a video shows a real person or a "digital person" generated or forged with AI, and the risks this brings continue to spread. Nearly half of the world's population will vote in elections this year, and AI deepfakes could pose enormous risks to those elections, a prospect that also worries regulators.
Senior US officials: AI is fueling cybercrime
Robert Joyce, Director of Cybersecurity at the US National Security Agency, said at an international cybersecurity conference held at Fordham University that people who lack the relevant technical skills can now use AI guidance to carry out hacking operations and complete intrusions they previously could not. Joyce said this will make criminals who use artificial intelligence more efficient and more dangerous. He added, however, that advances in AI can also help US authorities detect malicious activity.
James Smith, Assistant Director in Charge of the FBI's New York Field Office, also said at the meeting that the FBI has observed an increase in cyber intrusions as artificial intelligence lowers the technical barriers to carrying them out. At the same conference, two senior US federal prosecutors pointed out that artificial intelligence may fuel the spread of certain financial crimes.
Manhattan US Attorney Damian Williams said that artificial intelligence can help people who do not speak English generate credible messages with which to defraud potential victims of their money. Brooklyn US Attorney Breon Peace pointed out that AI-generated "deepfake" images and videos can be used to deceive the security systems banks use to verify customer identities and prevent money laundering. Peace said, "This in turn will allow criminals and terrorists to open accounts on a large scale, disrupting the control systems we have built over the past few decades."
In fact, over the past year, as understanding and adoption of AI have grown, cases of criminals exploiting the technology have become increasingly frequent, drawing close attention from many countries. AI-enabled criminal methods are often more sophisticated and harder to identify than traditional ones, making such crime harder to combat. The emergence and popularization of generative AI is also making the cybersecurity landscape more complex.
AI's impact on the US economy could reach $1 trillion over the next 10 years
The report warns that about 90% of jobs will be affected in the process. The CEO of Oxford Economics said in a statement, "The research findings demonstrate how quickly this technology may disrupt the trajectory of the US economy, providing valuable insights for leaders to harness its potential and adapt quickly."
According to the research, generative artificial intelligence has the potential to improve operational efficiency, create new revenue streams, spur innovation in products and services, and ultimately redefine businesses. The underlying economic model examines 18,000 tasks that drive the US economy. Based on enterprise adoption rates, the report finds that generative artificial intelligence could raise US productivity by 1.7% to 3.5% and add $477 billion to $1 trillion in annual value to the country's gross domestic product over the next 10 years. The research concludes that 52% of jobs face significant change, that roughly 9% of the existing US workforce may be displaced, and that about 1% may struggle to find new work.
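For readers who want to sanity-check those figures, the short sketch below (a rough illustration, not part of the report) shows how productivity gains of 1.7% to 3.5% translate into roughly $477 billion to $1 trillion a year when applied to a baseline US GDP of about $28 trillion; that baseline is an assumption made here for illustration, not a number taken from the study.

    # Back-of-the-envelope check of the reported range.
    # BASELINE_GDP_USD is an assumed figure (~$28 trillion), not from the report.
    BASELINE_GDP_USD = 28e12

    for gain in (0.017, 0.035):  # reported productivity gain range
        added_value = BASELINE_GDP_USD * gain
        print(f"{gain:.1%} productivity gain -> ~${added_value / 1e9:,.0f} billion per year")

Running this prints roughly $476 billion and $980 billion per year, which is consistent with the $477 billion to $1 trillion range cited in the report.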
As elections approach, AI deepfakes raise concerns
FBI Director Christopher Wray pointed out at a meeting on the 9th that information warfare, disinformation, and misinformation have existed for decades and that the risk itself is not new, but AI has upgraded the weapon. He added that AI makes this information weapon more effective, allowing people to create more believable fake personas, more sophisticated false information, and fabricated evidence that is harder to identify as false.
Experts say deepfakes are a form of synthetic media that uses AI to create seemingly real videos or images. AI deepfake technology is already used in fields such as film and television production, but it has also raised concerns about malicious misuse, such as spreading misinformation and manipulating public opinion. When used maliciously, fake images, likenesses, and voices can easily mislead voters and amplify bias, a problem that the FBI and Paul Nakasone, Commander of US Cyber Command and Director of the National Security Agency, are trying to address. Nakasone said the US government's election security team is studying what has happened in the past, as well as what is currently happening online, to identify potential threats and determine how to address these risks.
Regulatory risks are becoming increasingly prominent
The application of generative artificial intelligence not only drives economic growth and makes daily life more convenient, but also raises profound questions about how to regulate it, and even fears that the technology could spin out of control. Industry insiders worry that the "black box" nature of AI algorithms and algorithmic bias will seriously undermine social equity, exacerbate the "information cocoon" phenomenon, and further divide an already polarized American society. An opinion poll shows that most adults from both parties in the United States are concerned that artificial intelligence will "increase the spread of false information" in the 2024 election.
The widespread application of artificial intelligence has also intensified concerns about mass unemployment and wealth inequality. A survey of 2,000 corporate employees conducted by the Organisation for Economic Co-operation and Development found that three-fifths of respondents worried about losing their jobs entirely to artificial intelligence within the next 10 years. Some practitioners also fear that the uncontrolled development of artificial intelligence could threaten human survival. Hundreds of executives, experts, and scholars in the AI industry issued a joint statement this year through the US non-profit Center for AI Safety, calling for "mitigating the risk of extinction from AI" to be made a global priority alongside other risks such as pandemics and nuclear war.
Industry insiders point out that, with governance frameworks and safeguards failing to keep pace, it is crucial for regulators to put binding rules in place. Technology companies are also calling on the US government and legislators to act as soon as possible, but compared with the speed at which artificial intelligence is developing, the relevant legislative process in the US lags far behind. It is clear that while US artificial intelligence made rapid technical progress in 2023, management of the technology's risks still needs to be strengthened. How to ensure that the technology serves humanity safely and reliably, without spinning out of control, should be a key question for the new year.