
Build a multilingual automatic translation pipeline with Amazon Translate Active Custom Translation

AWS Machine Learning

Active Custom Translation (ACT) allows you to customize translation output on the fly by providing tailored translation examples in the form of parallel data.
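As a minimal sketch of the pattern described above, an ACT batch translation job is started by passing a parallel data resource name to Amazon Translate's `StartTextTranslationJob` API. All bucket names, the role ARN, and the parallel data name below are hypothetical placeholders, not values from the post.

```python
# Sketch: assemble the request for an Amazon Translate batch job that applies
# Active Custom Translation (ACT) via a parallel data resource. Every resource
# name and ARN here is a hypothetical placeholder.

def build_act_job_request(job_name, input_uri, output_uri, role_arn,
                          source_lang, target_langs, parallel_data):
    """Build the kwargs for translate.start_text_translation_job()."""
    return {
        "JobName": job_name,
        "InputDataConfig": {"S3Uri": input_uri, "ContentType": "text/plain"},
        "OutputDataConfig": {"S3Uri": output_uri},
        "DataAccessRoleArn": role_arn,
        "SourceLanguageCode": source_lang,
        "TargetLanguageCodes": target_langs,
        # ACT: the parallel data (tailored example translations) that
        # customizes the translation output.
        "ParallelDataNames": [parallel_data],
    }

request = build_act_job_request(
    job_name="act-demo-job",
    input_uri="s3://my-input-bucket/docs/",
    output_uri="s3://my-output-bucket/translated/",
    role_arn="arn:aws:iam::123456789012:role/TranslateJobRole",
    source_lang="en",
    target_langs=["es", "fr"],
    parallel_data="my-parallel-data",
)
# In a real pipeline you would then call:
# boto3.client("translate").start_text_translation_job(**request)
```

The commented boto3 call at the end shows where this request would be submitted in an actual pipeline.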


Build well-architected IDP solutions with a custom lens – Part 4: Performance efficiency

AWS Machine Learning

You can save time, money, and labor by implementing document classification in your workflow, so that documents are routed to downstream applications and APIs based on document type. For example, by storing the document at each processing phase, you can roll processing back to a previous step if needed.
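A minimal sketch of this type-based routing: after classification, each document is dispatched to a downstream handler chosen by its type. The document types and handler targets are illustrative assumptions, not the post's actual integrations.

```python
# Sketch of type-based routing in an IDP workflow: after classification,
# each document is sent to a downstream handler selected by its type.
# Document types and handler targets are illustrative assumptions.

def route_document(doc):
    handlers = {
        "invoice": lambda d: f"sent {d['id']} to accounts-payable API",
        "id_card": lambda d: f"sent {d['id']} to identity-verification API",
        "contract": lambda d: f"sent {d['id']} to legal-review queue",
    }
    handler = handlers.get(doc["type"])
    if handler is None:
        # Unknown types fall back to human review instead of failing.
        return f"queued {doc['id']} for human review"
    return handler(doc)

print(route_document({"id": "doc-1", "type": "invoice"}))
print(route_document({"id": "doc-2", "type": "fax"}))
```

The dictionary dispatch keeps adding a new document type to a one-line change.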



Benchmark and optimize endpoint deployment in Amazon SageMaker JumpStart 

AWS Machine Learning

This post explores these relationships via a comprehensive benchmarking of LLMs available in Amazon SageMaker JumpStart, including Llama 2, Falcon, and Mistral variants. We provide theoretical principles on how accelerator specifications impact LLM benchmarking. Additionally, each model is fully sharded across the supported instance.
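The core of such endpoint benchmarking is measuring latency and deriving throughput. Below is a self-contained sketch of that loop; `generate()` is a stand-in stub for an endpoint invocation, and a real benchmark would call the deployed SageMaker endpoint instead.

```python
import time

# Sketch of the latency/throughput measurement behind LLM endpoint
# benchmarking. generate() is a stand-in for an endpoint invocation;
# a real benchmark would invoke the SageMaker endpoint here.

def generate(prompt, n_tokens=128):
    time.sleep(0.01)  # stand-in for model inference latency
    return ["tok"] * n_tokens

def benchmark(prompt, runs=5, n_tokens=128):
    latencies = []
    for _ in range(runs):
        start = time.perf_counter()
        generate(prompt, n_tokens)
        latencies.append(time.perf_counter() - start)
    avg = sum(latencies) / len(latencies)
    return {"avg_latency_s": avg, "tokens_per_s": n_tokens / avg}

stats = benchmark("Explain model sharding.", runs=3)
```

Averaging over several runs smooths out per-request jitter; production benchmarks would also report tail latencies (p90/p99), not just the mean.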


Gemma is now available in Amazon SageMaker JumpStart 

AWS Machine Learning

You will also find a Deploy button, which takes you to a landing page where you can test inference with an example payload. To deploy Gemma with the SageMaker Python SDK, see the code showing the deployment of Gemma on JumpStart and an example of how to use the deployed model in this GitHub notebook.
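As a sketch of what such an example payload looks like, the snippet below builds an inference request in the common Hugging Face text-generation schema used by many JumpStart endpoints. The model ID and parameter names are assumptions for illustration, not the exact values from the post's notebook.

```python
# Sketch of an example inference payload for a JumpStart text-generation
# endpoint such as Gemma. The schema ("inputs" plus a "parameters" dict)
# follows the common Hugging Face text-generation convention; treat the
# model ID and parameter names as assumptions, not the post's exact payload.

def build_payload(prompt, max_new_tokens=256, temperature=0.7):
    return {
        "inputs": prompt,
        "parameters": {
            "max_new_tokens": max_new_tokens,
            "temperature": temperature,
            "do_sample": True,
        },
    }

payload = build_payload("Write a haiku about object storage.")
# With the SageMaker Python SDK, the deploy-and-invoke flow would look like:
# from sagemaker.jumpstart.model import JumpStartModel
# model = JumpStartModel(model_id="huggingface-llm-gemma-7b-instruct")  # hypothetical ID
# predictor = model.deploy()
# predictor.predict(payload)
```

The commented SDK calls mark where the payload would actually be sent once the endpoint exists.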


Mixtral 8x22B is now available in Amazon SageMaker JumpStart

AWS Machine Learning

What is Mixtral 8x22B? Mixtral 8x22B is Mistral AI’s latest open-weights model and sets a new standard for performance and efficiency among available foundation models, as measured by Mistral AI across standard industry benchmarks. In this section, we provide example prompts. Deployment starts when you choose Deploy.
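For the example prompts mentioned above, Mistral-family instruct models are commonly prompted with an `[INST] ... [/INST]` template. The exact template is an assumption here; check the model card before relying on it.

```python
# Sketch of the [INST] ... [/INST] instruction format commonly used with
# Mistral-family instruct models such as Mixtral 8x22B Instruct. Treat the
# exact template as an assumption and verify it against the model card.

def build_instruct_prompt(user_message, system_hint=None):
    body = user_message if system_hint is None else f"{system_hint}\n\n{user_message}"
    return f"<s>[INST] {body} [/INST]"

prompt = build_instruct_prompt(
    "Summarize the benefits of sparse mixture-of-experts models."
)
```

Wrapping user text in the template the model was fine-tuned on generally matters more for output quality than any sampling parameter.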


Best practices to build generative AI applications on AWS

AWS Machine Learning

By the end, you will have solid guidelines and a helpful flow chart for determining the best method to develop your own FM-powered applications, grounded in real-life examples. The following screenshot shows an example of a zero-shot prompt with the Anthropic Claude 2.1 model. In these instructions, we didn’t provide any examples.
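To make the zero-shot idea concrete, the sketch below contrasts a zero-shot prompt (instructions only, no examples) with a few-shot prompt that prepends worked examples. The classification task and example texts are illustrative assumptions.

```python
# Sketch contrasting a zero-shot prompt (instructions only) with a few-shot
# prompt that prepends worked examples. Task and examples are illustrative.

def zero_shot(task, text):
    return f"{task}\n\nText: {text}\nAnswer:"

def few_shot(task, examples, text):
    shots = "\n".join(f"Text: {t}\nAnswer: {a}" for t, a in examples)
    return f"{task}\n\n{shots}\nText: {text}\nAnswer:"

task = "Classify the sentiment of the text as positive or negative."
zs = zero_shot(task, "The deployment went smoothly.")
fs = few_shot(
    task,
    [("Great latency!", "positive"), ("It keeps timing out.", "negative")],
    "The deployment went smoothly.",
)
```

The only difference between the two prompts is the block of worked examples; that block is exactly what "we didn't provide any examples" refers to in the zero-shot case.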


Image classification model selection using Amazon SageMaker JumpStart

AWS Machine Learning

The former question addresses model selection across model architectures, while the latter concerns benchmarking trained models against a test dataset. This post provides details on how to implement large-scale Amazon SageMaker benchmarking and model selection tasks. Example models compared include swin-large-patch4-window7-224 (195.4M parameters) and efficientnet-b5 (29.0M parameters).
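A minimal sketch of the selection step: given per-model benchmark results, pick the most accurate model, breaking ties toward fewer parameters. The accuracy values below are illustrative assumptions, not numbers from the post; only the parameter counts come from it.

```python
# Sketch of model selection from benchmark results. Parameter counts are the
# ones mentioned in the post; accuracy values are illustrative assumptions.

results = {
    "swin-large-patch4-window7-224": {"params_m": 195.4, "accuracy": 0.91},
    "efficientnet-b5": {"params_m": 29.0, "accuracy": 0.89},
}

def select_model(results):
    # Highest test accuracy wins; ties break toward the smaller model.
    return max(
        results,
        key=lambda m: (results[m]["accuracy"], -results[m]["params_m"]),
    )

best = select_model(results)
```

In practice the same ranking key could also fold in latency or cost per inference, depending on deployment constraints.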
