
Secure Amazon SageMaker Studio presigned URLs Part 3: Multi-account private API access to Studio

AWS Machine Learning

One important aspect of this foundation is to organize their AWS environment following a multi-account strategy. In this post, we show how you can extend that architecture to multiple accounts to support multiple lines of business (LOBs).
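As a minimal sketch of the multi-account idea (an assumption for illustration, not the post's exact implementation), a centralized service could assume a role in an LOB account and generate the Studio presigned URL there. The role ARN, domain ID, and user profile name below are placeholders.

```python
# Sketch: generate a SageMaker Studio presigned URL in another (LOB) account
# by assuming a cross-account role that is allowed to call CreatePresignedDomainUrl.
import boto3

# Hypothetical identifiers -- replace with values from your own accounts.
LOB_ROLE_ARN = "arn:aws:iam::111122223333:role/StudioPresignedUrlRole"
DOMAIN_ID = "d-xxxxxxxxxxxx"
USER_PROFILE = "data-scientist-1"

def get_studio_url() -> str:
    # Assume the role in the LOB account that owns the Studio domain.
    creds = boto3.client("sts").assume_role(
        RoleArn=LOB_ROLE_ARN, RoleSessionName="studio-url"
    )["Credentials"]

    sagemaker = boto3.client(
        "sagemaker",
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )
    # The URL is only valid for a short window; hand it to the caller immediately.
    return sagemaker.create_presigned_domain_url(
        DomainId=DOMAIN_ID,
        UserProfileName=USER_PROFILE,
        ExpiresInSeconds=300,
    )["AuthorizedUrl"]
```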


Secure Amazon SageMaker Studio presigned URLs Part 2: Private API with JWT authentication

AWS Machine Learning

In this post, we continue to build on the previous solution to demonstrate how to build a private API, using Amazon API Gateway as a proxy interface to generate and access Amazon SageMaker presigned URLs. The user invokes the createStudioPresignedUrl API on API Gateway along with a token in the request header.
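As a rough illustration of that proxy pattern, the Lambda handler below returns the presigned URL after API Gateway has authorized the request. The event fields used here (the authorizer context key and query string parameter) are assumptions for the sketch, not the post's exact contract.

```python
# Illustrative Lambda handler behind the private API Gateway endpoint.
import json
import boto3

sagemaker = boto3.client("sagemaker")

def lambda_handler(event, context):
    # A JWT/Lambda authorizer is assumed to have validated the token already
    # and passed the resolved user profile through the request context.
    user_profile = event["requestContext"]["authorizer"]["userProfileName"]
    domain_id = event["queryStringParameters"]["domainId"]

    response = sagemaker.create_presigned_domain_url(
        DomainId=domain_id,
        UserProfileName=user_profile,
        ExpiresInSeconds=300,
    )
    return {
        "statusCode": 200,
        "body": json.dumps({"presignedUrl": response["AuthorizedUrl"]}),
    }
```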



Use Amazon SageMaker pipeline sharing to view or manage pipelines across AWS accounts

AWS Machine Learning

On August 9, 2022, we announced the general availability of cross-account sharing of Amazon SageMaker Pipelines entities. You can now use cross-account support for Amazon SageMaker Pipelines to share pipeline entities across AWS accounts and access shared pipelines directly through Amazon SageMaker API calls.
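For example, once a pipeline has been shared with a consumer account (for instance through AWS RAM), that account can reference it by ARN in ordinary SageMaker API calls; the ARN below is a placeholder.

```python
# Sketch: read a shared pipeline from a consumer account using its ARN.
import boto3

SHARED_PIPELINE_ARN = (
    "arn:aws:sagemaker:us-east-1:111122223333:pipeline/training-pipeline"
)

sagemaker = boto3.client("sagemaker")

# DescribePipeline and ListPipelineExecutions accept the shared pipeline's ARN.
pipeline = sagemaker.describe_pipeline(PipelineName=SHARED_PIPELINE_ARN)
print(pipeline["PipelineStatus"])

executions = sagemaker.list_pipeline_executions(PipelineName=SHARED_PIPELINE_ARN)
for execution in executions["PipelineExecutionSummaries"]:
    print(execution["PipelineExecutionArn"], execution["PipelineExecutionStatus"])
```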


Securing MLflow in AWS: Fine-grained access control with AWS native services

AWS Machine Learning

In this post, we address these limitations by implementing access control outside of the MLflow server and offloading authentication and authorization tasks to Amazon API Gateway, where we implement fine-grained access control mechanisms at the resource level using AWS Identity and Access Management (IAM) and add an IAM authorizer.
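A minimal sketch of what a client call could look like once the IAM authorizer is in place: the request to the MLflow REST API is SigV4-signed for the execute-api service. The endpoint URL is a placeholder and the MLflow path shown is only illustrative.

```python
# Sketch: call an IAM-authorized API Gateway endpoint fronting MLflow
# by SigV4-signing the HTTP request with the caller's AWS credentials.
import boto3
import requests
from botocore.auth import SigV4Auth
from botocore.awsrequest import AWSRequest

REGION = "us-east-1"
API_URL = (
    "https://abc123.execute-api.us-east-1.amazonaws.com/prod"
    "/api/2.0/mlflow/experiments/get?experiment_id=0"  # illustrative MLflow path
)

credentials = boto3.Session().get_credentials().get_frozen_credentials()

# Sign for the execute-api service so the API Gateway IAM authorizer accepts it.
request = AWSRequest(method="GET", url=API_URL)
SigV4Auth(credentials, "execute-api", REGION).add_auth(request)

response = requests.get(API_URL, headers=dict(request.headers))
print(response.status_code, response.json())
```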


Designing generative AI workloads for resilience

AWS Machine Learning

If you’re performing prompt engineering, you should persist your prompts to a reliable data store. That will safeguard your prompts in case of accidental loss or as part of your overall disaster recovery strategy. In the low-latency case, you need to account for the time it takes to generate the embedding vectors.
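One possible way to persist prompts, assuming a versioned DynamoDB table (the table name and key schema here are hypothetical, not the post's prescribed design):

```python
# Sketch: version prompt templates in DynamoDB so they survive accidental loss
# and can be restored as part of disaster recovery.
import time
import boto3

table = boto3.resource("dynamodb").Table("prompt-templates")  # hypothetical table

def save_prompt(prompt_id: str, template: str) -> None:
    # Each save writes a new item version instead of overwriting the previous one.
    table.put_item(
        Item={
            "prompt_id": prompt_id,       # partition key
            "version": int(time.time()),  # sort key: timestamp-based version
            "template": template,
        }
    )

save_prompt(
    "order-summary",
    "Summarize the following customer order in two sentences:\n{order_text}",
)
```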


Use Amazon SageMaker Model Card sharing to improve model governance

AWS Machine Learning

As you scale your models, projects, and teams, we recommend as a best practice that you adopt a multi-account strategy that provides project and team isolation for ML model development and deployment. Depending on your governance requirements, the Data Science and Dev accounts can be merged into a single AWS account.
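For instance, a model card can be registered programmatically in the account that owns model governance with the SageMaker CreateModelCard API; the card name and content fields below are illustrative placeholders.

```python
# Sketch: create a minimal SageMaker Model Card in draft status.
import json
import boto3

sagemaker = boto3.client("sagemaker")

content = {
    "model_overview": {
        "model_description": "Churn classifier trained in the Data Science account.",
        "model_owner": "ml-platform-team",
    },
    "intended_uses": {
        "purpose_of_model": "Weekly customer churn scoring.",
    },
}

sagemaker.create_model_card(
    ModelCardName="churn-classifier-card",
    Content=json.dumps(content),
    ModelCardStatus="Draft",  # promote to PendingReview/Approved as governance advances
)
```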


Architect defense-in-depth security for generative AI applications using the OWASP Top 10 for LLMs

AWS Machine Learning

This post provides three guided steps to architect risk management strategies while developing generative AI applications using LLMs. In addition to awareness, your teams should take action to account for generative AI in governance, assurance, and compliance validation practices.