In recent months, excitement surrounding DeepSeek and the success of its open-source R1 AI model has been hard to ignore. The model, which reportedly rivals OpenAI’s o1 on math, science, and coding benchmarks, has made quite a splash. So much so that the DeepSeek app became the most downloaded free app in the U.S., overtaking ChatGPT. The ripple effect of this buzz has contributed to notable declines in the stock prices of major players like Microsoft, Meta, and NVIDIA.
Despite the excitement, there are growing security worries tied to DeepSeek’s advancements. Meta’s Chief AI Scientist, Yann LeCun, attributes DeepSeek’s success to its open-source approach. Yet this very openness has its downsides. DeepSeek recently had to temporarily suspend new user registrations, citing “large-scale malicious attacks” on its services. Existing users, however, can still use the app without interruption.
The industry has been abuzz with praise for DeepSeek’s achievement, especially since it has caught up with proprietary models. Some critics, however, downplay the accomplishment, pointing out that the software’s open-source nature means anyone can access and modify its code for free. DeepSeek’s open-source V3 model, the foundation underlying R1 and the app, is the real game-changer here.
Building this model didn’t break the bank either: DeepSeek reports a training cost of roughly $6 million, a small fraction of what is typically spent on flagship models. The figure feeds into a broader debate about whether traditional scaling laws are hitting a wall, since the bottleneck for next-generation models is increasingly the supply of high-quality training data rather than raw compute.
The excitement surrounding DeepSeek has coincided with OpenAI and SoftBank unveiling their ambitious $500 billion Stargate Project. The effort aims to build out AI infrastructure across the U.S., with President Donald J. Trump calling it the largest AI infrastructure initiative ever, meant to “secure the future of technology” stateside.
In light of all this, DeepSeek’s commitment to the goal OpenAI was founded on, building AI that benefits everyone at no cost, is commendable. However, the recent cyberattacks are a warning sign for the Chinese startup, and OpenAI CEO Sam Altman may have had a point when he suggested that keeping advanced AI models closed-source is a simpler route to meeting safety standards.