What exactly is generative AI (artificial intelligence)? And how is it different from the AI we already know?
Let’s start with traditional AI, which is designed to perform specific tasks intelligently. How? It excels at ingesting vast data sets, identifying salient patterns and then making decisions or predictions based on that data. Familiar applications include suggesting what movie to watch next on a streaming service, customer service chatbots and credit card fraud detection.
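To make that distinction concrete, here is a minimal sketch of pattern-based prediction, assuming the open-source scikit-learn library and a handful of made-up transaction features (purely illustrative, not a real fraud data set):

```python
# A minimal sketch of "traditional" AI: a model trained to predict a specific
# outcome from patterns in data. Uses scikit-learn with made-up transaction
# features purely for illustration (not a real fraud data set).
from sklearn.ensemble import RandomForestClassifier

# Each row: [amount, hour_of_day, merchant_risk_score]; label 1 = fraud.
X = [[12.0, 14, 0.1], [950.0, 3, 0.9], [40.0, 19, 0.2], [700.0, 2, 0.8]]
y = [0, 1, 0, 1]

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# The model can classify a new transaction, but it cannot create anything new.
print(model.predict([[800.0, 4, 0.7]]))  # likely flags this as fraud
```

A model like this can only answer the question it was trained for; that limitation is exactly what generative AI moves beyond.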
In the early 2020s, progress in transformer-driven deep neural networks paved the way for generative AI platforms, including ChatGPT™, Bing Chat™, Bard™, LLaMA™ and DALL-E™. These technologies are different: like traditional AI, they learn patterns from their training data, but they can also generate new data with characteristics similar to the training set. (And they are good; that last sentence was written with Bard.)
This “generation” is what sets it apart. As a recent Forbes article described it, “It's like an imaginative friend who can come up with original, creative content.”
Generative AI’s outputs can take a variety of forms, including text, images, music and even computer code. Generative AI is already being used across a wide variety of industries, including art, writing, software development, product design, healthcare, finance, gaming, marketing and fashion.
Large language models (LLMs) are what make some of today’s most popular generative AI tools possible, including ChatGPT™. Large language model training (LLM training) teaches a model to interpret, understand and return natural-sounding, text-based responses. The more comprehensive the data set used for LLM training, the more naturalistic the result.
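As a rough illustration of what a trained language model does, the sketch below assumes the open-source Hugging Face transformers library and the small GPT-2 model; the commercial tools named above do the same thing at far larger scale behind their own interfaces:

```python
# A minimal sketch of text generation with a pretrained language model.
# Assumes the Hugging Face "transformers" library and the small, publicly
# available GPT-2 model (an assumption for accessibility, not a tool named
# in this article).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Generative AI is different from traditional AI because"
result = generator(prompt, max_new_tokens=40, num_return_sequences=1)

# The model continues the prompt with new text that mimics the statistical
# patterns of its training data: the "generation" described above.
print(result[0]["generated_text"])
```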
McKinsey predicts that generative AI “could add the equivalent of $2.6 trillion to $4.4 trillion annually across the 63 use cases we analyzed. By comparison, the United Kingdom’s entire GDP in 2021 was $3.1 trillion. This estimate would roughly double if we include the impact of embedding generative AI into software that is currently used for other tasks beyond those use cases.”
The applications are almost limitless. In fact, many big businesses have recognized these applications and are already putting generative AI to use.
Smart manufacturing
According to Deloitte, 86 percent of manufacturers believe that smart factories will be the main driver of competition within the next two years. Already there are over 15 billion connected IoT devices, and that number is expected to nearly double, to more than 29 billion, by 2030. Big machines using big data are transforming the industrial market, relying on complex generative AI workloads to manage the rapidly growing sensor data.
At Micron, we not only supply memory and storage solutions critical to generative AI and LLMs, we also leverage AI in our own manufacturing processes. Silicon manufacturing is an extremely complex process, taking months and involving some 1,500 steps. Micron employs sophisticated AI at every step of this process, dramatically improving accuracy and productivity. The benefits are many: higher output, yield and quality; a safer working environment; improved efficiency; and a more sustainable business.
Automotive
Generative AI is transforming the automotive industry with accelerated prototyping, where designers create simple sketches, and the system generates detailed 3D models. These models are refined iteratively, incorporating external market trends, aerodynamic efficiency data, crash and ergonomic simulations, and emerging styles.
Generative AI also has the potential to pave the way for the safe rollout of autonomous vehicles without putting the public at risk while the technology matures. Because generative AI can produce images and videos that build out real-world scenarios, autonomous vehicles can learn and adapt to different environments within a controlled setting. That means less expensive field testing and richer training for autonomous vehicle decision-making models.
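As a simple, hedged illustration of generating a synthetic driving scenario, the sketch below assumes the open-source Hugging Face diffusers library, a public Stable Diffusion checkpoint and a CUDA GPU; real autonomous-vehicle programs rely on purpose-built simulation stacks rather than this exact approach:

```python
# A minimal sketch of generating synthetic driving imagery for training data.
# Assumes the Hugging Face "diffusers" library, the public Stable Diffusion 2.1
# checkpoint and a CUDA GPU; this only illustrates the idea of creating rare
# scenarios on demand instead of waiting to encounter them on a real road.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")

# Describe a rare, hard-to-collect scenario in plain language.
prompt = ("dashcam view of a rainy highway at night, "
          "a deer crossing the road, headlight glare")

image = pipe(prompt, num_inference_steps=30).images[0]
image.save("synthetic_scenario.png")
```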
On the production side, generative AI optimizes material distribution, reduces waste and yields assembly processes and component designs that are easier and more cost-effective to manufacture.
Science
Generative AI is making a significant impact on scientific discovery, transforming everything from creative content to synthetic data to generative engineering and design.
In fact, Gartner predicts that “by 2025, more than 30% of new drugs and materials will be systematically discovered using generative AI techniques, up from zero today. Generative AI looks promising for the pharmaceutical industry, given the opportunity to reduce costs and time in drug discovery.”
McKinsey analyzed 63 use cases and predicts that customer operations, marketing and sales, software engineering and R&D across all verticals will be the most heavily affected by generative AI and the LLM training upon which many AI tools are based.
What’s next?
While there are justifiable concerns about the potential misuse of generative AI, including intellectual property infringement, cybercrime and deepfakes, the possibilities for good are overwhelming.
Micron’s own Eric Booth, a senior business development manager for cloud, is in a doctoral program at Boise State University researching ways that the technology can help children with speech disabilities.
“In speech therapy, we used to think that the therapist would give the student content to read and then a tool would score how well they did in pronunciation and enunciation,” explains Eric. “But with generative AI, the tool can actually handle the whole process. It excels in identifying patterns, so it can tell if a student is, for instance, consistently mispronouncing their Os.”
Until recently, speech recognition meant you needed a big server with lots of memory, and everything had to go to the cloud. Now, speech recognition is built into your phone. The compute has gotten faster, the memory has gotten faster, and a former data center process is now on your phone or other endpoint device.
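For a sense of what on-device speech recognition looks like today, here is a minimal sketch assuming the open-source openai-whisper package and a hypothetical local audio file (neither is a Micron product nor a tool named in this article):

```python
# A minimal sketch of speech recognition running locally rather than in the
# cloud. Assumes the open-source "openai-whisper" package; the "tiny" model
# (~39M parameters) is small enough to run on laptop- or phone-class hardware.
import whisper

model = whisper.load_model("tiny")                  # weights load locally
result = model.transcribe("practice_session.wav")   # hypothetical audio file

# The transcript never leaves the device, which is what makes applications
# like on-device speech-therapy feedback practical.
print(result["text"])
```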
Soon, generative AI based on large language model training will be on your phone. That’s because work on LLMs is not just about building ever more complex models; it’s also about simplifying them to run on endpoint devices such as your phone or PC. As these large language models grow, training them is only practical in a cloud environment. But once LLM training is complete, the model can be simplified and moved to the endpoint device.
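As one hedged example of what that simplification step can look like, the sketch below applies PyTorch dynamic quantization to a small stand-in network; production LLM deployments use heavier-duty techniques such as distillation and 4-bit weights, but the train-big-then-shrink principle is the same:

```python
# A minimal sketch of "simplifying" a trained model so it fits on an endpoint
# device, using PyTorch dynamic quantization on a small stand-in network.
import os
import torch
import torch.nn as nn

# Stand-in for a trained model; in practice the weights come from cloud-scale
# LLM training.
model = nn.Sequential(nn.Linear(4096, 4096), nn.ReLU(), nn.Linear(4096, 4096))

# Convert the Linear layers' weights from 32-bit floats to 8-bit integers
# for inference.
quantized = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

def size_mb(m: nn.Module, path: str) -> float:
    torch.save(m.state_dict(), path)
    return round(os.path.getsize(path) / 1e6, 1)

print("fp32 size (MB):", size_mb(model, "fp32.pt"))      # roughly 134 MB
print("int8 size (MB):", size_mb(quantized, "int8.pt"))  # roughly 34 MB
```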
Then the power of generative AI is literally in your hand: a tool and a companion to help in day-to-day life. The virtual assistant of the future is likely to become a personal AI companion that grows and adapts with you, learning from your experiences and the data you generate to better understand and predict your personal preferences.
Imagine this companion with you from the very beginning. An AI companion that grows alongside you, evolving with every step of your journey, enriching your life at every stage.
As a baby, your AI companion could help nurture your curious mind: reading you stories, playing educational games and sparking your imagination. As you grow, it can follow you from device to device, becoming more intelligent with every passing moment, just like you. It can guide you through your educational journey, adapting to your unique learning style. It can help you excel by learning how you best absorb information and adjusting its methods, presenting concepts in ways that resonate with you and making your education more effective and enjoyable. As a coach, your companion can draw on what it has learned about you to help you make informed decisions and forge your path through life.
Even as an adult, your AI companion will optimize your schedule and daily tasks, streamlining your workflow and boosting your productivity. The data you generate every day is used by your AI companion to continually refine and hone its skills. This type of technology and experience will be driven by generative AI or some not-yet-invented derivative AI methodology.
Whether it’s manufacturing, automotive, science or other applications, generative AI and its derivatives will shape the future in ways we can’t imagine — and Micron is at the heart of the AI data driving the devices on your wrist, in your hand, and in the cloud.
Generative AI needs to access and absorb enormous amounts of data all at once and draw from vast stores of memory to determine proper responses. This requires Micron technologies like HBM3E, high-density DDR5 DRAM and multi-terabyte SSD storage, all of which enable the speed and capacity required for generative AI training and inference in the cloud. For endpoint devices like mobile phones, striking a balance between power efficiency and performance is key for AI-driven user experiences. Micron LPDDR5X offers the speed and bandwidth needed to put powerful generative AI in your hand.
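A rough, back-of-the-envelope calculation (illustrative parameter counts, not Micron data) shows why memory capacity and bandwidth are such central constraints:

```python
# Illustrative estimate of how much memory just the weights of a language
# model occupy at different precisions. Parameter counts are hypothetical
# round numbers, not figures from this article.
def weight_memory_gb(parameters: float, bytes_per_weight: float) -> float:
    return round(parameters * bytes_per_weight / 1e9, 1)

for params, name in [(7e9, "7B-parameter model"), (70e9, "70B-parameter model")]:
    print(name)
    print("  fp16 (cloud training/serving):", weight_memory_gb(params, 2), "GB")
    print("  int4 (edge inference):        ", weight_memory_gb(params, 0.5), "GB")
```

Even before counting activations or training state, tens to hundreds of gigabytes of weights must sit in fast memory in the data center, while a quantized model has to squeeze into the few gigabytes a phone can spare.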
Generative AI’s capabilities have advanced rapidly, and many of its beneficial use cases are still in development, but it’s easy to see its potential to change our day-to-day lives. Micron’s vision is that this technology will truly enrich the lives of all.