AWS Machine Learning Blog

Fine-tune and Deploy Mistral 7B with Amazon SageMaker JumpStart

Today, we are excited to announce the capability to fine-tune the Mistral 7B model using Amazon SageMaker JumpStart. You can now fine-tune and deploy Mistral text generation models on SageMaker JumpStart using the Amazon SageMaker Studio UI with a few clicks or using the SageMaker Python SDK.

Foundation models perform very well on generative tasks, from crafting text and summaries and answering questions to producing images and videos. Despite the great generalization capabilities of these models, there are often use cases with very specific domain data (such as healthcare or financial services), for which these models may not provide good results. This creates a need to further fine-tune these generative AI models on the use case-specific and domain-specific data.

In this post, we demonstrate how to fine-tune the Mistral 7B model using SageMaker JumpStart.

What is Mistral 7B

Mistral 7B is a foundation model developed by Mistral AI that supports English text and code generation. It supports a variety of use cases, such as text summarization, classification, text completion, and code completion. To demonstrate the customizability of the model, Mistral AI has also released a Mistral 7B-Instruct model for chat use cases, fine-tuned using a variety of publicly available conversation datasets.

Mistral 7B is a transformer model that uses grouped query attention and sliding window attention to achieve faster inference (lower latency) and handle longer sequences. Grouped query attention is an architecture that combines multi-query and multi-head attention to achieve output quality close to multi-head attention at speeds comparable to multi-query attention. Sliding window attention uses the stacked layers of a transformer model to attend to information from earlier in the sequence, which helps the model understand a longer stretch of context. Mistral 7B has an 8,000-token context length, demonstrates low latency and high throughput, and performs strongly when compared to larger model alternatives, with the low memory requirements of a 7B model. The model is made available under the permissive Apache 2.0 license, for use without restrictions.

You can fine-tune the models using either the SageMaker Studio UI or SageMaker Python SDK. We discuss both methods in this post.

Fine-tune via the SageMaker Studio UI

In SageMaker Studio, you can access the Mistral model via SageMaker JumpStart under Models, notebooks, and solutions, as shown in the following screenshot.

If you don’t see Mistral models, update your SageMaker Studio version by shutting down and restarting. For more information about version updates, refer to Shut down and Update Studio Apps.

On the model page, you can point to the Amazon Simple Storage Service (Amazon S3) bucket containing the training and validation datasets for fine-tuning. In addition, you can configure the deployment settings, hyperparameters, and security settings for fine-tuning. You can then choose Train to start the training job on a SageMaker ML instance.

Deploy the model

After the model is fine-tuned, you can deploy it using the model page on SageMaker JumpStart. The option to deploy the fine-tuned model will appear when fine-tuning is complete, as shown in the following screenshot.

Fine-tune via the SageMaker Python SDK

You can also fine-tune Mistral models using the SageMaker Python SDK. The complete notebook is available on GitHub. In this section, we provide examples of two types of fine-tuning.
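Throughout the code that follows, model_id identifies the Mistral 7B model in SageMaker JumpStart. The following minimal setup is a sketch; the exact identifier is an assumption based on JumpStart's naming, so confirm it against the example notebook:

# The JumpStart identifier below is assumed; check the example notebook if it differs
model_id, model_version = "huggingface-llm-mistral-7b", "*"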

Instruction fine-tuning

Instruction tuning is a technique that involves fine-tuning a language model on a collection of natural language processing (NLP) tasks using instructions. In this technique, the model is trained to perform tasks by following textual instructions rather than being trained on a separate dataset for each task. The model is fine-tuned with a set of input and output examples for each task, allowing it to generalize to new tasks that it hasn’t been explicitly trained on, as long as prompts are provided for those tasks. Instruction tuning helps improve the accuracy and effectiveness of models and is helpful in situations where large datasets aren’t available for specific tasks.

Let’s walk through the fine-tuning code provided in the example notebook with the SageMaker Python SDK.

We use a subset of the Dolly dataset in an instruction tuning format, and specify the template.json file describing the input and the output formats. The training data must be formatted in JSON lines (.jsonl) format, where each line is a dictionary representing a single data sample. In this case, we name it train.jsonl.

The following snippet is an example of train.jsonl. The keys instruction, context, and response in each sample should have corresponding entries {instruction}, {context}, {response} in the template.json.

{
    "instruction": "What is a dispersive prism?", 
    "context": "In optics, a dispersive prism is an optical prism that is used to disperse light, that is, to separate light into its spectral components (the colors of the rainbow). Different wavelengths (colors) of light will be deflected by the prism at different angles. This is a result of the prism material's index of refraction varying with wavelength (dispersion). Generally, longer wavelengths (red) undergo a smaller deviation than shorter wavelengths (blue). The dispersion of white light into colors by a prism led Sir Isaac Newton to conclude that white light consisted of a mixture of different colors.", 
    "response": "A dispersive prism is an optical prism that disperses the light's different wavelengths at different angles. When white light is shined through a dispersive prism it will separate into the different colors of the rainbow."
}
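If you want to build train.jsonl yourself, one option is to export the examples directly with the Hugging Face datasets library. The following is a sketch, not part of the original notebook; the public databricks/databricks-dolly-15k dataset ID is assumed, and the summarization filter is just one example of a subset:

from datasets import load_dataset

# Load the public Dolly dataset and keep an example subset of it
dolly = load_dataset("databricks/databricks-dolly-15k", split="train")
subset = dolly.filter(lambda row: row["category"] == "summarization")

# Keep only the keys referenced by template.json and write one JSON object per line
subset = subset.remove_columns(["category"])
subset.to_json("train.jsonl")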

The following is a sample of template.json:

{
    "prompt": "Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.\n\n### Instruction:\n{instruction}\n\n### Input:\n{context}\n\n",
    "completion": " {response}"
}
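To make the mapping concrete, the following sketch shows how a training sample fills the template placeholders. It is only an illustration of the convention; the actual JumpStart training script handles this internally:

import json

# Read the prompt template and one training sample
with open("template.json") as f:
    template = json.load(f)
with open("train.jsonl") as f:
    example = json.loads(f.readline())

# Each {placeholder} in the template is filled from the matching key in the sample
prompt = template["prompt"].format(**example)
completion = template["completion"].format(**example)
print(prompt + completion)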

Next, upload the prompt template and the training data to an S3 bucket.
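One way to stage both files is with the SageMaker SDK's S3Uploader, as in the following sketch; the bucket and prefix are placeholders, and the example notebook may organize the data differently:

import sagemaker
from sagemaker.s3 import S3Uploader

# Placeholder location for the training data; adjust the bucket and prefix as needed
output_bucket = sagemaker.Session().default_bucket()
train_data_location = f"s3://{output_bucket}/dolly_dataset"

S3Uploader.upload("train.jsonl", train_data_location)
S3Uploader.upload("template.json", train_data_location)

With the data in Amazon S3, you can set the hyperparameters: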

my_hyperparameters["epoch"] = "1"
my_hyperparameters["per_device_train_batch_size"] = "2"
my_hyperparameters["gradient_accumulation_steps"] = "2"
my_hyperparameters["instruction_tuned"] = "True"
print(my_hyperparameters)

You can then start the fine-tuning process and deploy the model to an inference endpoint. In the following code, we use an ml.g5.12xlarge instance:

from sagemaker.jumpstart.estimator import JumpStartEstimator

instruction_tuned_estimator = JumpStartEstimator(
    model_id=model_id,
    hyperparameters=my_hyperparameters,
    instance_type="ml.g5.12xlarge",
)
instruction_tuned_estimator.fit({"train": train_data_location}, logs=True)

instruction_tuned_predictor = instruction_tuned_estimator.deploy()
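After the endpoint is in service, you can send it a prompt in the same format as the training template. The payload below follows the common JumpStart text generation interface (inputs plus parameters); treat it as a sketch and adapt it if the notebook's helper functions differ:

# Build a prompt in the same format used for fine-tuning (no input context in this case)
prompt = (
    "Below is an instruction that describes a task, paired with an input that provides further context. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWhat is a dispersive prism?\n\n### Input:\n\n"
)

payload = {
    "inputs": prompt,
    "parameters": {"max_new_tokens": 150, "temperature": 0.2},
}
response = instruction_tuned_predictor.predict(payload)
print(response)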

Domain adaptation fine-tuning

Domain adaptation fine-tuning is a process that refines a pre-trained LLM to better suit a specific domain or task. By using a smaller, domain-specific dataset, the LLM can be fine-tuned to understand and generate content that is more accurate, relevant, and insightful for that specific domain, while still retaining the vast knowledge it gained during its original training.

The Mistral model can be fine-tuned on any domain-specific dataset. After it’s fine-tuned, it’s expected to generate domain-specific text and solve various NLP tasks in that specific domain. For the training dataset, provide a train directory and an optional validation directory, each containing a single CSV, JSON, or TXT file. For CSV and JSON formats, the training data is taken from the text column, or from the first column if no column named text is present. Ensure that only one file exists under each directory. For instance, the input data may be SEC filings of Amazon provided as a text file:

This report includes estimates, projections, statements relating to our
business plans, objectives, and expected operating results that are “forward-
looking statements” within the meaning of the Private Securities Litigation
Reform Act of 1995, Section 27A of the Securities Act of 1933, and Section 21E
of the Securities Exchange Act of 1934. Forward-looking statements may appear
throughout this report, including the following sections: “Business” (Part I,
Item 1 of this Form 10-K), “Risk Factors” (Part I, Item 1A of this Form 10-K),
and “Management’s Discussion and Analysis of Financial Condition and Results
of Operations” (Part II, Item 7 of this Form 10-K). These forward-looking
statements generally are identified by the words “believe,” “project,”
“expect,” “anticipate,” “estimate,” “intend,” “strategy,” “future,”
“opportunity,” “plan,” “may,” “should,” “will,” “would,” “will be,” “will
continue,” “will likely result,” and similar expressions.

You can start domain adaptation fine-tuning by specifying the hyperparameter “instruction_tuned” as “False”. The rest of the steps are similar to the instruction fine-tuning steps.

We set max_input_length to 2048 on the ml.g5.12xlarge instance. You can use a higher input length on a larger instance type. For details, see the example notebook.
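Putting these settings together, a domain adaptation run looks much like the instruction fine-tuning example. The following is a sketch; here train_data_location is assumed to point to the S3 prefix containing the SEC filings text file:

from sagemaker import hyperparameters
from sagemaker.jumpstart.estimator import JumpStartEstimator

# Retrieve defaults, then switch off instruction tuning and set the input length
my_hyperparameters = hyperparameters.retrieve_default(model_id=model_id, model_version=model_version)
my_hyperparameters["instruction_tuned"] = "False"
my_hyperparameters["max_input_length"] = "2048"

domain_adaptation_estimator = JumpStartEstimator(
    model_id=model_id,
    hyperparameters=my_hyperparameters,
    instance_type="ml.g5.12xlarge",
)
# train_data_location is assumed to be the S3 prefix holding the domain text file
domain_adaptation_estimator.fit({"train": train_data_location}, logs=True)
domain_adaptation_predictor = domain_adaptation_estimator.deploy()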

Inference example with and without fine-tuning

The following examples contain the results of the Mistral 7B model fine-tuned with SEC filing documents of Amazon from 2021–2022. For each input prompt, we compare the output before and after fine-tuning; the inference parameters were “temperature”: 0.2 and “max_new_tokens”: 150.

Input prompt: This Form 10-K report shows that

Output before fine-tuning:

the company has been in business for 10 years and has a total of 10 employees. The company has a total of 10 employees and is in the business of providing services to the public. The company has a total of 10 employees and is in the business of providing services to the public.

The company has a total of 10 employees and is in the business of providing services to the public. The company has a total of 10 employees and is in the business of providing services to the public. The company has a total of 10 employees and is in the business of providing services to the public. The company has a total of 10 employees and is in the business of providing

Output after fine-tuning:

we had net sales of $1.2 billion, $1.5 billion, and $1.8 billion in 2004, 2003, and 2002. These amounts include sales from our international websites. Net sales from our international websites were $323 million, $242 million, and $150 million in 2004, 2003, and 2002. Net sales from our international websites represented 27%, 16%, and 8% of net sales in 2004, 2003, and 2002.

Input prompt: We serve consumers through

Output before fine-tuning:

our retail network of 1,700 stores, which includes 1,200 stores in the United States, 400 stores in Canada, and 100 stores in Mexico. We also serve consumers through our e-commerce websites.

## What is the biggest home improvement store?

The Home Depot is the largest home improvement retailer in the United States.

## What is the largest home improvement store in the world?

The largest home improvement store in the world is Home Depot.

## What is the biggest home improvement store in the

Output after fine-tuning:

our online and physical stores and focus on selection, price, and convenience. We design our stores to enable hundreds of millions of unique products to be sold by us and by third parties across dozens of product categories. Customers access our offerings through our websites, mobile apps, Alexa, devices, streaming, and physically visiting our stores. We also manufacture and sell electronic devices, including Kindle, Fire tablet, Fire TV, Echo, and Ring, and we develop and produce media content. We seek to offer our customers low prices, fast and free delivery, easy-to-use functionality, and timely customer service. In addition, we offer subscription services such as Amazon Prime, a

Input prompt: Our vision is

Output before fine-tuning:

to be the leading provider of quality, affordable, and accessible health care services in the communities we serve.

Our mission is to provide quality, affordable, and accessible health care services to the communities we serve.

Our values are:

– Integrity: We act with honesty and integrity in all our dealings.
– Respect: We treat everyone with respect and dignity.
– Excellence: We strive for excellence in all we do.
– Compassion: We care for our patients and each other with compassion.
– Teamwork: We work together as a team to achieve our goals.

Our commitment to our patients is to provide quality, affordable, and accessible health care services.

Output after fine-tuning:

to be Earth’s most customer-centric company. We are guided by four principles: customer obsession rather than competitor focus, passion for invention, commitment to operational excellence, and long-term thinking. In each of our segments, we serve our primary customer sets, consisting of consumers, sellers, developers, enterprises, and content creators. In addition, we provide services, such as advertising. We have organized our operations into three segments: North America, International, and AWS. These segments reflect the way the Company evaluates its business performance and manages its operations. Information on our net sales is contained in Item 8 of Part II, “Financial Statements

As you can see, the fine-tuned model provides more specific information related to Amazon compared to the generic pre-trained model. This is because fine-tuning adapts the model to understand the nuances, patterns, and specifics of the provided dataset. By starting from a pre-trained model and tailoring it with fine-tuning, you get the best of both worlds: the broad knowledge of the pre-trained model and the specialized accuracy for your unique dataset. One size may not fit all in the world of machine learning, and fine-tuning is the tailor-made solution you need!

Conclusion

In this post, we discussed fine-tuning the Mistral 7B model using SageMaker JumpStart. We showed how you can use the SageMaker JumpStart console in SageMaker Studio or the SageMaker Python SDK to fine-tune and deploy these models. As a next step, you can try fine-tuning these models on your own dataset using the code provided in the GitHub repository to test and benchmark the results for your use cases.


About the Authors

Xin Huang is a Senior Applied Scientist for Amazon SageMaker JumpStart and Amazon SageMaker built-in algorithms. He focuses on developing scalable machine learning algorithms. His research interests are in the area of natural language processing, explainable deep learning on tabular data, and robust analysis of non-parametric space-time clustering. He has published many papers in ACL, ICDM, KDD conferences, and Royal Statistical Society: Series A.

Vivek Gangasani is an AI/ML Startup Solutions Architect for Generative AI startups at AWS. He helps emerging GenAI startups build innovative solutions using AWS services and accelerated compute. Currently, he is focused on developing strategies for fine-tuning and optimizing the inference performance of large language models. In his free time, Vivek enjoys hiking, watching movies, and trying different cuisines.

Dr. Ashish Khetan is a Senior Applied Scientist with Amazon SageMaker built-in algorithms and helps develop machine learning algorithms. He got his PhD from University of Illinois Urbana-Champaign. He is an active researcher in machine learning and statistical inference, and has published many papers in NeurIPS, ICML, ICLR, JMLR, ACL, and EMNLP conferences.