
Foundation Models in AI and Automation

The chances are high that you’ve recently read an article on the internet that was created entirely by artificial intelligence (AI). Foundation models are a recent class of AI systems trained at colossal scale. Machine learning (ML) systems like Google’s Pathways Language Model (PaLM) are getting better at a wide range of tasks, including understanding the myriad meanings behind everyday human language.


Foundation models open up the possibility for more companies to rely on AI to produce creative work and complete complex tasks that are currently beyond the capabilities of other AI systems. Let’s explore these models and their potential impact.

 

What Are Foundation Models?

Foundation models are AI systems trained on large pools of data, which makes them adept at handling new problems. They’re currently used in applications that do everything from protein sequence research to automation to coding. Foundation models form the core of an ecosystem of software applications: once they’re in place, developers can add other capabilities that enhance an application’s performance.

While the scope of use for foundation models has expanded over the past few years, the technology supporting them is not new. Foundation models are based on self-supervised learning and deep neural networks, which have existed for decades. The automation they enable can be used in a wide array of industries.

 

How Do Foundation Models Work?

Foundation models use deep neural networks: configurations of artificial neurons loosely designed to mimic the functions of the human brain. They require substantial computing power and sophisticated mathematics, and they’re capable of complex pattern matching.
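To make that concrete, here is a minimal sketch of a deep neural network in PyTorch. It is purely illustrative: real foundation models stack similar layers to billions of parameters, but the underlying pattern-matching machinery is the same.

```python
# A tiny feed-forward neural network: stacked layers of artificial neurons.
# Illustrative only; the dimensions here are minuscule compared to a
# foundation model's.
import torch
import torch.nn as nn

class TinyNet(nn.Module):
    def __init__(self, in_dim=16, hidden_dim=32, out_dim=4):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(in_dim, hidden_dim),     # layer of artificial neurons
            nn.ReLU(),                         # nonlinearity between layers
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, out_dim),
        )

    def forward(self, x):
        return self.layers(x)

model = TinyNet()
x = torch.randn(8, 16)   # a batch of 8 random 16-dimensional inputs
logits = model(x)        # forward pass: small-scale pattern matching
print(logits.shape)      # torch.Size([8, 4])
```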

Foundation models have five key properties:

  • Expressivity: Foundation models can flexibly capture and represent information.
  • Scalability: Foundation models can expand as needed and store vast amounts of information for training. 
  • Multimodality: Foundation models bring together various domains and modalities. 
  • Memory capacity: Foundation models retain large amounts of accumulated knowledge.
  • Compositionality: Foundation models combine information to learn the meaning behind data. 

The excitement around foundation models emerges from the way they can learn beyond the boundaries of the data provided. One prominent model, DALL-E 2, turns text into high-quality images. It can also create image variations from a caption or edit images based on written instructions. Google’s PaLM can generate explanations that demonstrate its ability to interpret metaphors and jokes, an ability PaLM developed beyond its original training data.
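DALL-E 2 itself is accessed through OpenAI’s hosted service, so here is a hedged sketch of the same text-to-image idea using an open Stable Diffusion model via Hugging Face’s diffusers library. The checkpoint name and GPU availability are assumptions:

```python
# Text-to-image generation with an open model standing in for DALL-E 2.
# Assumes the diffusers library, the named checkpoint, and a CUDA GPU.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed model checkpoint
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # image generation is impractical without a GPU

# One line of text in, one high-quality image out
image = pipe("an astronaut riding a horse, watercolor style").images[0]
image.save("astronaut.png")
```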

 

How Do Foundation Models Differ From Other Models?

One big difference between foundation models and other AI models is that scientists don’t construct their behavior directly. Instead, that behavior is induced from the information the system is fed. There has also been a push to bring together different methodologies, making it easier to use foundation models to enable automation across a wide array of applications.

Another advantage that foundation models bring over other ML and deep learning approaches is the ability to transfer knowledge between different tasks. For example, a model developed to recognize objects in images could apply that expertise to a different task, such as automatically distinguishing activities within videos.
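Here is a minimal sketch of that transfer-learning pattern in PyTorch, assuming torchvision is available. It reuses an ImageNet-pretrained image backbone for a hypothetical new 10-class task; recognizing activities in videos would need a video model, but the knowledge-reuse pattern is the same:

```python
# Transfer learning: reuse a pretrained backbone for a new task.
import torch.nn as nn
from torchvision import models

# Load a backbone pretrained on ImageNet object recognition
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pretrained weights so the learned features transfer as-is
for param in backbone.parameters():
    param.requires_grad = False

# Swap in a new classification head for a hypothetical 10-class task;
# only this layer is trained on the new task's data
backbone.fc = nn.Linear(backbone.fc.in_features, 10)
```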

The scale that can be achieved with foundation models makes them more powerful. The increased capacity is made possible by advancements in computer hardware and the emergence of the transformer architecture. Two factors make the transformer an ideal fit as the backbone of foundation models, the first of which is illustrated in the sketch after this list:

  1. Unlike recurrent neural networks, transformers perform their computations in parallel rather than waiting for them to occur in sequence.
  2. The transformer architecture has fewer built-in inductive biases, which makes training more universal.
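The parallelism comes from the transformer’s core operation, scaled dot-product attention, which scores every position in a sequence against every other position with a single matrix multiplication. A minimal PyTorch sketch:

```python
# Scaled dot-product self-attention: all positions are related to all other
# positions at once, so no sequential stepping (as in an RNN) is needed.
import math
import torch

def scaled_dot_product_attention(q, k, v):
    # q, k, v: (batch, seq_len, dim); one matrix multiply scores every
    # position against every other position simultaneously
    scores = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1))
    weights = torch.softmax(scores, dim=-1)
    return weights @ v

x = torch.randn(2, 5, 8)  # 2 sequences, 5 tokens each, 8-dim embeddings
out = scaled_dot_product_attention(x, x, x)  # self-attention
print(out.shape)          # torch.Size([2, 5, 8])
```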

Because of that, companies can train ML algorithms at a scale that allows them to adapt to a wide variety of functions.

 

What Are the Current Limitations of Foundation Models?

Foundation models require large amounts of resources to continue their training. Some estimates put the cost of training GPT-3, OpenAI’s language model, at close to $5 million. For that reason, most of the development of foundation models remains in the realm of large companies like Google.

These cost constraints make it harder for independent researchers to validate the integrity of these systems and report the results to the rest of the AI community. There is also a risk that control of the technology behind powerful models will be concentrated in the hands of a few large corporations.

Another factor currently hindering the widespread use of foundation models is the question of their reliability. Many models source their training data from the internet without limitation, and some scientists remain skeptical about the ability of foundation models to truly interpret language.

Some worry about using one foundation model as the base for other applications: if there is a flaw within the system, it affects every program using it as a source. From that perspective, it would be better to have a network of small, specialized neural networks supporting applications to minimize the impact of a potential failure.

 

The Future Implications of Foundation Models

One widely used foundation model is Bidirectional Encoder Representations from Transformers (BERT). Google relies on BERT to interpret the context behind search queries entered by users in more than 70 languages. The model is also made available to anyone looking to train a foundation model from scratch.

Self-supervised models like BERT can be fine-tuned to automate smaller downstream tasks, opening up possibilities for their use in other settings. They could be useful in health care, helping to deliver quality care to patients; they could also make drug discovery more efficient and help interpret medical records.
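As a hedged sketch of what fine-tuning looks like in practice, here is a minimal example using Hugging Face’s transformers library to load BERT with a two-class classification head. The head is randomly initialized and only becomes useful after fine-tuning on labeled task data:

```python
# Load pretrained BERT and attach a classification head for a hypothetical
# two-class downstream task. Assumes the transformers library is installed.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2  # e.g., relevant vs. not relevant
)

inputs = tokenizer("The treatment plan was clearly explained.",
                   return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1))  # predicted class (meaningful after fine-tuning)
```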

Many believe that foundation models need some type of grounding to achieve their full potential, including for automation purposes. That means incorporating other types of AI. PIGLeT is an example of that approach: the system pairs a pretrained language model with a neural model of physical dynamics.

Researchers also need to incorporate feedback from diverse groups of people instead of focusing on a small group of specialists. These groups should include:

  • Developers
  • Designers
  • Students
  • Malicious actors
  • Domain experts
  • Underrepresented populations

More diverse research around foundation models can only benefit efforts to expand the use of the technology. Foundation models tend to be compatible with other research areas, like causal networks and probabilistic programs, opening the door to potential breakthroughs in countless areas. The debate over foundation models’ ability to transform how we use AI in today’s society is expected to continue.

 

Find Top Talent in AI and Machine Learning

Looking for someone who’s capable of taking your AI solutions to the next level? Reach out to Ghost Mountain to handle your recruiting needs.
