By Jim Davis, founder and principal analyst at Edge Research Group
Predicting developments in edge computing in 2025 is a challenge. It’s like the joke about the weather in San Francisco: if you don’t like it, wait a minute. Recent developments in the industry hint at a broader transformation in edge computing, where optimization and efficiency matter more because raw computing power is usually constrained.
Take AI and large language models, for example. Assumptions about model development are being challenged by Chinese startup DeepSeek, which sent shockwaves through the tech industry and stock markets with the sudden popularity of its chatbot and DeepSeek-R1 large language model (LLM).
In some benchmarks, the quality and performance of DeepSeek’s responses approach or exceed those of models from companies like OpenAI. However, what is roiling Silicon Valley and Wall Street is the company’s claim of developing its latest model for just $6 million.
The claim deserves scrutiny given the unclear nature of support from the Chinese government and whether the figure includes the cost of all the chips experts suggest are required to train such models. Still, DeepSeek’s emergence highlights a crucial shift in how AI models might be developed and deployed, especially at the edge.
The stock market’s dramatic reaction to DeepSeek’s announcement, including Nvidia’s 17% stock price decline on January 27, signals more than just investor jitters. It represents a growing recognition that AI development may not necessarily demand the massive computational resources and budgets that have dominated the narrative. This potential paradigm shift has implications for edge AI, where computational efficiency and cost-effectiveness are paramount. As organizations seek to process more data at the edge rather than in centralized clouds, architectural approaches such as the “mixture of experts” design used by DeepSeek could provide a blueprint for developing more resource-conscious generative AI models.
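For readers wondering why a mixture-of-experts design can be more resource-conscious, here is a minimal sketch in Python using NumPy. The layer sizes, expert count, and the moe_forward routine are illustrative assumptions for this article, not DeepSeek’s actual architecture; the point is only that a router activates a small subset of experts per token.

```python
# Minimal mixture-of-experts (MoE) routing sketch (illustrative, not DeepSeek's design).
# For each token, a router picks the top-k experts; the rest of the model's
# parameters are never touched for that token, which is why MoE models can be
# cheaper to run than dense models of similar total size.
import numpy as np

rng = np.random.default_rng(0)

d_model = 64     # hidden size (illustrative)
n_experts = 8    # total experts in the layer
top_k = 2        # experts activated per token

# Each "expert" is a tiny feed-forward weight matrix; the router is a linear layer.
experts = [rng.standard_normal((d_model, d_model)) * 0.02 for _ in range(n_experts)]
router = rng.standard_normal((d_model, n_experts)) * 0.02

def moe_forward(x: np.ndarray) -> np.ndarray:
    """Route each token to its top-k experts and mix their outputs."""
    logits = x @ router                             # (tokens, n_experts)
    top = np.argsort(logits, axis=-1)[:, -top_k:]   # indices of the chosen experts
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        chosen = top[t]
        weights = np.exp(logits[t, chosen])
        weights /= weights.sum()                    # softmax over only the chosen experts
        for w, e in zip(weights, chosen):
            out[t] += w * (x[t] @ experts[e])       # only top_k of n_experts run per token
    return out

tokens = rng.standard_normal((4, d_model))          # a toy batch of 4 token vectors
print(moe_forward(tokens).shape)                    # (4, 64)
```

The ratio is what matters in the sketch: only top_k of n_experts parameter blocks do any work per token, which is the property that makes such architectures attractive where compute and power budgets are tight, as they are at the edge.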
2025 is the year of Edge AI

Beyond the hype and market volatility, the reality is that a lot of development work on edge AI has been ongoing for years.
As we navigate through 2025, EdgeIR will keep an eye on what’s possible in edge […]
