
Generating value from enterprise data: Best practices for Text2SQL and generative AI

AWS Machine Learning

In this post, we provide an introduction to text to SQL (Text2SQL) and explore use cases, challenges, design patterns, and best practices. Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models (FMs) via a single API, enabling you to easily build and scale generative AI applications.
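As a rough illustration of the Text2SQL pattern, the sketch below sends a natural language question plus a table schema to a foundation model through the Bedrock Converse API; the model ID, schema, and prompt wording are placeholders, not the post's actual implementation.

```python
import boto3

# Minimal Text2SQL sketch using the Amazon Bedrock Converse API.
# The model ID and table schema below are placeholders, not from the post.
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

schema = "CREATE TABLE orders (order_id INT, customer_id INT, total DECIMAL, order_date DATE);"
question = "What was the total order value per customer last month?"

response = bedrock.converse(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",  # any Bedrock FM suited to your use case
    messages=[{
        "role": "user",
        "content": [{"text": f"Given this schema:\n{schema}\n"
                             f"Write a SQL query to answer: {question}\n"
                             f"Return only the SQL."}],
    }],
    inferenceConfig={"maxTokens": 512, "temperature": 0},
)

sql = response["output"]["message"]["content"][0]["text"]
print(sql)
```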


Drive efficiencies with CI/CD best practices on Amazon Lex

AWS Machine Learning

You liked the overall experience and now want to deploy the bot in your production environment, but aren’t sure about best practices for Amazon Lex. In this post, we review the best practices for developing and deploying Amazon Lex bots, enabling you to streamline the end-to-end bot lifecycle and optimize your operations.
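One way a pipeline might promote a Lex V2 bot is sketched below, assuming a bot already exists: build the draft locale, snapshot an immutable version, and repoint the production alias. The bot ID, locale, and alias values are placeholders, and the exact promotion steps the post recommends may differ.

```python
import boto3

# Hypothetical promotion step in a Lex CI/CD pipeline: build the locale,
# snapshot a new bot version, and point the production alias at it.
# The bot ID, locale, and alias ID are placeholders.
lex = boto3.client("lexv2-models")

BOT_ID = "EXAMPLEBOTID"
LOCALE = "en_US"
PROD_ALIAS_ID = "EXAMPLEALIAS"

# 1. Build the draft locale (in a real pipeline, wait for the build to
#    complete and gate promotion on automated tests).
lex.build_bot_locale(botId=BOT_ID, botVersion="DRAFT", localeId=LOCALE)

# 2. Create an immutable version from the draft.
version = lex.create_bot_version(
    botId=BOT_ID,
    botVersionLocaleSpecification={LOCALE: {"sourceBotVersion": "DRAFT"}},
)["botVersion"]

# 3. Promote: repoint the production alias to the new version.
lex.update_bot_alias(
    botAliasId=PROD_ALIAS_ID,
    botAliasName="prod",
    botId=BOT_ID,
    botVersion=version,
)
print(f"Promoted bot version {version} to prod alias")
```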


Trending Sources


CBRE and AWS perform natural language queries of structured data using Amazon Bedrock

AWS Machine Learning

AWS Prototyping successfully delivered a scalable prototype that solved CBRE’s business problem with high accuracy (over 95%), supported reuse of embeddings for similar NLQs, and provided an API gateway for integration into CBRE’s dashboards. The following diagram illustrates the web interface and API management layer.
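A minimal sketch of how embedding reuse for similar NLQs could work is shown below: embed the incoming question, compare it to previously answered questions with cosine similarity, and reuse the cached SQL when the match is close enough. The embedding model, threshold, cache structure, and the generate_sql_with_fm helper are assumptions for illustration, not CBRE's actual design.

```python
import json
import math
import boto3

bedrock = boto3.client("bedrock-runtime")

def embed(text: str) -> list[float]:
    # Titan text embeddings; the model ID is an example, not necessarily what CBRE used.
    resp = bedrock.invoke_model(
        modelId="amazon.titan-embed-text-v1",
        body=json.dumps({"inputText": text}),
    )
    return json.loads(resp["body"].read())["embedding"]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

# cache: list of (embedding, generated_sql) pairs built up from earlier queries
def reuse_or_generate(nlq: str, cache: list, threshold: float = 0.95) -> str:
    vec = embed(nlq)
    for cached_vec, cached_sql in cache:
        if cosine(vec, cached_vec) >= threshold:
            return cached_sql            # similar NLQ seen before: reuse its SQL
    sql = generate_sql_with_fm(nlq)      # hypothetical call into the Text2SQL chain
    cache.append((vec, sql))
    return sql
```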


Build an Amazon SageMaker Model Registry approval and promotion workflow with human intervention

AWS Machine Learning

Specialist Data Engineering at Merck, and Prabakaran Mathaiyan, Sr. ML Engineer at Tiger Analytics. The solution uses AWS Lambda , Amazon API Gateway , Amazon EventBridge , and SageMaker to automate the workflow with human approval intervention in the middle. API Gateway invokes a Lambda function to initiate model updates.
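A hedged sketch of what the approval Lambda behind API Gateway might look like follows: it records the reviewer's decision by updating the model package's approval status in the SageMaker Model Registry, which a downstream EventBridge rule can react to. The event shape and field names are assumptions, not the post's exact implementation.

```python
import json
import boto3

sm = boto3.client("sagemaker")

def lambda_handler(event, context):
    # Assumed event shape: API Gateway proxy request carrying the reviewer's decision.
    body = json.loads(event.get("body", "{}"))
    model_package_arn = body["model_package_arn"]
    decision = body["decision"]  # "Approved" or "Rejected"

    # Record the human decision in the SageMaker Model Registry; a downstream
    # EventBridge rule can pick up the status change and trigger deployment.
    sm.update_model_package(
        ModelPackageArn=model_package_arn,
        ModelApprovalStatus=decision,
        ApprovalDescription=body.get("comment", "Reviewed via approval workflow"),
    )
    return {"statusCode": 200, "body": json.dumps({"status": decision})}
```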


Designing generative AI workloads for resilience

AWS Machine Learning

There are unique considerations when engineering generative AI workloads through a resilience lens. If you’re performing prompt engineering, you should persist your prompts to a reliable data store. Make sure to use best practices for rate limiting, backoff and retry, and load shedding.
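For the backoff-and-retry recommendation, a minimal sketch wrapping a Bedrock invocation in exponential backoff with jitter might look like the following; the retry limits and error handling are illustrative defaults, not prescriptions from the post.

```python
import random
import time

import boto3
from botocore.exceptions import ClientError

bedrock = boto3.client("bedrock-runtime")

def invoke_with_backoff(model_id: str, body: str, max_attempts: int = 5) -> dict:
    """Retry throttled Bedrock invocations with exponential backoff and full jitter."""
    for attempt in range(max_attempts):
        try:
            return bedrock.invoke_model(modelId=model_id, body=body)
        except ClientError as err:
            code = err.response["Error"]["Code"]
            if code != "ThrottlingException" or attempt == max_attempts - 1:
                raise
            # Full jitter: sleep a random amount up to 2**attempt seconds, capped at 30s.
            time.sleep(min(30, 2 ** attempt) * random.random())
```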


Is your model good? A deep dive into Amazon SageMaker Canvas advanced metrics

AWS Machine Learning

It also enables you to evaluate the models using advanced metrics as if you were a data scientist. In this post, we show how a business analyst can evaluate and understand a classification churn model created with SageMaker Canvas using the Advanced metrics tab. The F1 score, the harmonic mean of precision and recall, provides a balanced evaluation of the model’s performance.
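For reference, precision, recall, and the F1 score can be computed as below; the churn labels are toy values, not results from the post.

```python
from sklearn.metrics import precision_score, recall_score, f1_score

# Toy churn predictions (1 = churned); the labels are illustrative only.
y_true = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0, 1, 0]

precision = precision_score(y_true, y_pred)   # TP / (TP + FP)
recall = recall_score(y_true, y_pred)         # TP / (TP + FN)
f1 = f1_score(y_true, y_pred)                 # harmonic mean of precision and recall

print(f"precision={precision:.2f} recall={recall:.2f} f1={f1:.2f}")
```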


Optimize generative AI workloads for environmental sustainability

AWS Machine Learning

In particular, we provide practical best practices for different customization scenarios, including training models from scratch, fine-tuning with additional data using full or parameter-efficient techniques, Retrieval Augmented Generation (RAG), and prompt engineering.
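As one example of a parameter-efficient technique the post discusses, LoRA fine-tuning with the Hugging Face peft library might look like this sketch; the base model and adapter hyperparameters are illustrative choices only.

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# Load a base model and wrap it with LoRA adapters so only a small fraction
# of parameters is trained; the model name and ranks are illustrative.
base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

lora = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora)
model.print_trainable_parameters()  # typically well under 1% of the base model's parameters
```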