Microsoft and Google have made a significant shift in computing by bringing AI to consumers through their search engines. Both recently announced next-generation AI-powered search that can provide more comprehensive, detailed answers to user queries. Microsoft is incorporating AI into Bing for text queries, while Google plans to use AI for text, image, and video searches. Both companies recognize that strong hardware infrastructure is essential for this technology, but neither has yet provided details about the actual hardware being used.

Microsoft and Google have been working on AI hardware for years, and the speed and accuracy of search results will be a test of these search engines' viability. Google's Bard runs on the company's own TPU (Tensor Processing Unit) chips, while Microsoft's AI supercomputer in Azure, which likely runs on GPUs, can deliver results in milliseconds. This sets up a battle in AI computing between Google's TPUs and Nvidia's GPUs.

Google and Microsoft Set Up AI Hardware Battle with Next-Generation Search / Google's Advanced Version of Artificial Intelligence (AI)

Microsoft is using a more advanced version of OpenAI's ChatGPT that can handle roughly 10 billion daily search queries. Microsoft's AI journey began with ensuring it had sufficient computing capacity; the company claims its AI supercomputer is among the five fastest in the world.

Microsoft and Google have invested years and significant resources in developing and configuring a complex set of distributed resources to help load balance, optimize performance, and scale. AI-powered search engines require robust hardware infrastructure, and the battle between Google's TPUs and Nvidia's GPUs will determine which company will dominate the market.


As more companies use artificial intelligence (AI) for search, the cost of computing for AI at the supercomputer level is expected to decrease over time. Microsoft's AI supercomputer has 285,000 CPU cores and 10,000 GPUs, while Google's Bard conversational AI is a lighter version of its LaMDA large-language model. Microsoft and Google are still building out the infrastructure to handle AI search, and both companies need to figure out how to handle peak times, when search requests spike. Google's TPUs have been critical components of the company's AI strategy, and Facebook is building data centers with the capacity for more AI computing. In general, data centers are increasingly being constructed for targeted workloads, which revolve more and more around AI applications and therefore feature more GPU and CPU content.


Cost and infrastructure are central considerations in implementing artificial intelligence (AI) in search engines such as Microsoft's Bing and Google Search. The cost of computing for AI at the supercomputer level is expected to come down over time as usage scales and optimizations are implemented, making it accessible to a larger user base. However, implementing AI requires a different approach to computing: it relies on hardware that can carry out matrix multiplication, unlike conventional computing, which revolves around CPUs.
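To make the matrix-multiplication point concrete, a minimal sketch in plain Python (illustrative only, not either company's code) shows the core operation of a neural-network layer and why it maps so well onto parallel accelerators like GPUs and TPUs:

```python
# Illustrative sketch: the heart of a neural-network layer is a
# matrix multiplication, the operation TPUs and GPUs accelerate.
# Plain Python for clarity; real systems use optimized hardware kernels.

def matmul(a, b):
    """Multiply an m x k matrix by a k x n matrix (lists of lists)."""
    m, k, n = len(a), len(b), len(b[0])
    assert all(len(row) == k for row in a), "inner dimensions must match"
    # Each output cell is an independent dot product, which is why the
    # whole operation can be spread across thousands of parallel cores.
    return [[sum(a[i][p] * b[p][j] for p in range(k)) for j in range(n)]
            for i in range(m)]

# A toy "layer": 2 input features mapped to 3 output features.
x = [[1.0, 2.0]]            # one input row
w = [[0.5, -1.0, 2.0],      # 2 x 3 weight matrix
     [1.5,  0.0, 1.0]]
print(matmul(x, w))         # [[3.5, -1.0, 4.0]]
```

A CPU executes these dot products largely one after another, whereas a GPU or TPU computes many output cells at once, which is the hardware distinction the passage above is drawing.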

Google is taking a cautious approach and releasing a lightweight version of its conversational AI, Bard, as opposed to its larger LaMDA model, which competes with OpenAI's GPT-3. Microsoft, on the other hand, is open to testing and using new AI hardware and is focusing more on having a flexible computing platform than optimizing for specific tasks.

Data centers are being built specifically to support AI workloads, with a focus on targeted workloads and more GPU and CPU content. Cloud providers go through lengthy evaluation cycles before picking the best CPUs, GPUs, and other components, with total cost of ownership a key consideration. In addition, some buyers may be unwilling to commit heavily to any one workload, which makes flexibility an important factor in data-center design.
