Introduction
In an era of rapid progress in artificial intelligence, technologies are becoming more complex and powerful, opening up new opportunities as well as potential threats. OpenAI recently presented o1, its latest artificial intelligence model, which has advanced reasoning and analytical capabilities. At the same time, the company drew attention to the risk of the model being misused for the development of biological weapons. This risk is classified as “medium,” the highest level of danger that OpenAI has ever assigned to one of its models. The classification underscores both the technological power of the release and the responsibility the company bears when bringing new products to market.
Risk assessment
In its system card, a document that explains the model’s key capabilities and potential threats, OpenAI rated the likelihood of o1 being misused to create biological weapons as a “medium risk.” The company emphasizes that this risk applies mainly to highly qualified specialists with biotechnology and laboratory experience: although o1 can solve complex problems in biology and chemistry, it does not yet have the ability to carry out laboratory experiments on its own. Still, this assessment stands out against previous models: in the GPT-4o system card, released in August, the probability of misuse in the biological domain was rated “low.”
OpenAI clarifies that biological weapons were chosen as the priority risk in the assessment because their development has a lower barrier to potential abuse than chemical, radiological, or nuclear weapons. This underscores the urgency of the problem and calls for caution in deploying such technologies.
Features and achievements of the o1 model
The o1 model significantly surpasses its predecessors in a number of areas, solving multi-stage tasks while accounting for all their details and, in effect, “thinking through” its answer. Its approach resembles human reasoning: instead of responding instantly, it works through a chain of intermediate steps, which helps it solve more complex problems. On a qualifying exam for the International Mathematical Olympiad (IMO), o1 solved 83% of the problems, while GPT-4o solved only 13%. This demonstrates significant progress in information processing and analysis.
Beyond its outstanding results in mathematics, o1 also handles graduate-level tasks in physics, chemistry, and biology. OpenAI believes this capability could open up new applications of artificial intelligence in science, research, and education.
Accessibility and investment
OpenAI has currently made o1 available to ChatGPT users with Plus and Team subscriptions, and the model will reach Enterprise and Edu subscribers next week. The company also plans to eventually open access to o1 for free users, but the timing has not yet been announced. This expanding availability highlights OpenAI’s interest in bringing new technologies to a wide audience and preparing users for more powerful tools.
Meanwhile, according to Bloomberg, OpenAI is in talks to raise $6.5 billion in investment, which could push the company’s valuation above $150 billion. That would make OpenAI one of the most valuable startups in the world, alongside giants like ByteDance and SpaceX. Potential investors include Microsoft, Nvidia, Apple, and Thrive Capital.
Conclusion
The introduction of the o1 model demonstrates breakthrough progress in artificial intelligence while underscoring the importance of a responsible approach to safety. By assigning o1 a “medium” risk level, OpenAI signaled the need to control how such technologies are used. Amid rapidly growing investor interest in the company, the question of safe AI use is becoming increasingly pressing, and OpenAI intends to pursue innovation while paying special attention to its impact on safety and the public good.
MADE WITH HELP OF: ChatGPT4
MORE: https://www.forbes.ru/tekhnologii/521200-openai-zaavila-o-povysennom-riske-primenenia-ee-ii-modeli-dla-sozdania-biooruzia (in Russian)
https://www.ft.com/content/37ba7236-2a64-4807-b1e1-7e21ee7d0914
https://openai.com/index/building-an-early-warning-system-for-llm-aided-biological-threat-creation/
https://openai.com/index/openai-o1-system-card/
https://decrypt.co/250568/opena-new-ai-steps-towards-biological-weapons-risks-warns-senate