
How BigBasket improved AI-enabled checkout at their physical stores using Amazon SageMaker

AWS Machine Learning

Note the following calculation: the global batch size is (number of nodes in the cluster) * (number of GPUs per node) * (per-GPU batch shard). A batch shard (mini-batch) is the subset of the dataset assigned to each GPU (worker) per iteration. BigBasket used the SMDDP library to reduce their overall training time.
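As a concrete illustration of that formula, here is a minimal sketch; the cluster shape and per-GPU batch size are illustrative values, not BigBasket's actual configuration:

```python
# Global batch size for data-parallel training, as described above.
nodes = 2              # number of nodes in the cluster (illustrative)
gpus_per_node = 8      # GPUs (workers) per node (illustrative)
per_gpu_batch = 32     # batch shard processed by each GPU per iteration (illustrative)

global_batch_size = nodes * gpus_per_node * per_gpu_batch
print(global_batch_size)  # 2 * 8 * 32 = 512
```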


Achieve four times higher ML inference throughput at three times lower cost per inference with Amazon EC2 G5 instances for NLP and CV PyTorch models

AWS Machine Learning

Requests can be batched to keep the accelerator fully utilized, especially small requests that would not saturate the compute node on their own. In our study, we determine the throughput vs. latency curve through a parametric sweep of batch size and of the number of queries issued by multi-threaded clients.
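A minimal sketch of such a sweep, assuming a hypothetical HTTP model server at ENDPOINT and illustrative grid values rather than the study's actual setup:

```python
import time
from concurrent.futures import ThreadPoolExecutor
import requests

ENDPOINT = "http://localhost:8080/invocations"   # hypothetical model server URL

def invoke(batch):
    """Send one batched request and return its latency in seconds."""
    start = time.perf_counter()
    requests.post(ENDPOINT, json={"inputs": batch})
    return time.perf_counter() - start

def sweep_point(batch_size, num_clients, requests_per_client=50):
    """Measure throughput and median latency for one (batch size, concurrency) pair."""
    batch = ["sample text"] * batch_size
    total_requests = num_clients * requests_per_client
    wall_start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=num_clients) as pool:
        latencies = list(pool.map(lambda _: invoke(batch), range(total_requests)))
    wall_time = time.perf_counter() - wall_start
    throughput = batch_size * total_requests / wall_time   # inferences per second
    p50 = sorted(latencies)[len(latencies) // 2]
    return throughput, p50

for batch_size in (1, 4, 8, 16, 32):
    for num_clients in (1, 4, 16):
        tput, p50 = sweep_point(batch_size, num_clients)
        print(f"batch={batch_size} clients={num_clients} "
              f"throughput={tput:.1f}/s p50_latency={p50 * 1000:.1f} ms")
```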



How Sophos trains a powerful, lightweight PDF malware detector at ultra scale with Amazon SageMaker

AWS Machine Learning

The main reason SophosAI chose SageMaker is the ability to benefit from fully managed distributed training on multi-node CPU instances simply by specifying more than one instance. SageMaker automatically splits the data across nodes, aggregates the results across peer nodes, and generates a single model.
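A minimal sketch of what "specifying more than one instance" looks like with the SageMaker Python SDK; the entry point script, role ARN, S3 paths, instance count, and instance type are placeholders, not SophosAI's actual configuration:

```python
from sagemaker.xgboost.estimator import XGBoost
from sagemaker.inputs import TrainingInput

# Hypothetical values: replace the role, script, and S3 paths with your own.
estimator = XGBoost(
    entry_point="train.py",              # your training script
    framework_version="1.5-1",
    role="arn:aws:iam::123456789012:role/SageMakerRole",
    instance_count=3,                    # >1 enables distributed training
    instance_type="ml.m5.4xlarge",       # multi-node CPU instances
)

# ShardedByS3Key splits the S3 objects across the training nodes.
train_input = TrainingInput(
    "s3://my-bucket/train/",
    distribution="ShardedByS3Key",
)
estimator.fit({"train": train_input})
```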


Optimize data preparation with new features in AWS SageMaker Data Wrangler

AWS Machine Learning

For Inference output node, enter the destination node corresponding to the transforms applied to your training data. In our example, the URI is s3://sagemaker-us-east-1-43257985977/data_wrangler_flows/example-2023-05-30T12-20-18.tar.gz. For Inference artifact name, enter the name of your inference artifact (with .tar.gz …
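One common way to consume such an inference artifact is in a serial inference pipeline, where the Data Wrangler transforms run in front of the trained model. The sketch below assumes hypothetical role, bucket, and model values and is not the exact setup from the post:

```python
from sagemaker import image_uris
from sagemaker.model import Model
from sagemaker.pipeline import PipelineModel

# Hypothetical values: role ARN, region, artifact URI, and model artifact are placeholders.
role = "arn:aws:iam::123456789012:role/SageMakerRole"
region = "us-east-1"

# Model that applies the Data Wrangler flow (the exported .tar.gz inference artifact).
dw_model = Model(
    image_uri=image_uris.retrieve(framework="data-wrangler", region=region),
    model_data="s3://my-bucket/data_wrangler_flows/example-flow.tar.gz",
    role=role,
)

# Trained model that consumes the transformed features.
xgb_model = Model(
    image_uri=image_uris.retrieve(framework="xgboost", region=region, version="1.5-1"),
    model_data="s3://my-bucket/models/model.tar.gz",
    role=role,
)

# The transforms run first, then the trained model, behind a single endpoint.
pipeline = PipelineModel(models=[dw_model, xgb_model], role=role)
pipeline.deploy(initial_instance_count=1, instance_type="ml.m5.xlarge")
```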


Unlock ML insights using the Amazon SageMaker Feature Store Feature Processor

AWS Machine Learning

[Sample feature rows from the car-sales dataset: e.g., Acura TLX Advance and Acura TLX A-Spec, model year 2023, condition New, with MSRP and price columns.] The @remote decorator runs the local Python code as a single or multi-node SageMaker training job.
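A minimal sketch of the @remote decorator; the function body, instance type, dependencies file, and dataset path are hypothetical:

```python
from sagemaker.remote_function import remote

# Hypothetical example: the decorated function runs as a SageMaker training job.
@remote(instance_type="ml.m5.xlarge", dependencies="./requirements.txt")
def count_listings(dataset_s3_uri: str) -> int:
    # Reading from S3 here assumes s3fs is listed in requirements.txt.
    import pandas as pd
    df = pd.read_csv(dataset_s3_uri)
    return len(df)

# The return value is serialized back to the caller when the job completes.
row_count = count_listings("s3://my-bucket/car-data/car_data.csv")
print(row_count)
```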


Large-scale revenue forecasting at Bosch with Amazon Forecast and Amazon SageMaker custom models

AWS Machine Learning

The revenue needs to be forecasted at every node in the hierarchy with a forecasting horizon of 12 months into the future. Monthly historical data is available, and we use a context window with monthly revenue data from the past 18 months, selected via HPO in the backtest window from July 2018 to June 2019.
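A hedged sketch of how such a predictor could be configured with the Amazon Forecast API via boto3; the names and ARNs are placeholders, and the post's actual solution also involves SageMaker custom models not shown here:

```python
import boto3

forecast = boto3.client("forecast")

# Hypothetical predictor configuration: predictor name and dataset group ARN are placeholders.
forecast.create_predictor(
    PredictorName="revenue-deep-ar-plus",
    AlgorithmArn="arn:aws:forecast:::algorithm/Deep_AR_Plus",
    ForecastHorizon=12,                            # forecast 12 months into the future
    TrainingParameters={"context_length": "18"},   # 18 months of history (chosen via HPO in the post)
    InputDataConfig={
        "DatasetGroupArn": "arn:aws:forecast:us-east-1:123456789012:dataset-group/revenue"
    },
    FeaturizationConfig={"ForecastFrequency": "M"},  # monthly data
)
```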


Amazon SageMaker built-in LightGBM now offers distributed training using Dask

AWS Machine Learning

The data statistics are presented as follows.

Dataset | Size | Number of Examples | Number of Features | Problem Type
lending club loan | ~10 G | 1,439,141 | 955 | Binary classification
code | ~10 G | 18,268,221 | 9 | Multi-class classification (number of classes in target: 10)
NYC taxi | ~0.5 … | | |
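For context, distributed LightGBM training on a Dask cluster (the mechanism the built-in algorithm builds on) looks roughly like the sketch below; the local cluster, dataset path, and hyperparameters are illustrative and not the benchmark's configuration:

```python
import dask.dataframe as dd
from dask.distributed import Client, LocalCluster
from lightgbm import DaskLGBMClassifier

# A local cluster stands in for the multi-node SageMaker training cluster.
cluster = LocalCluster(n_workers=4)
client = Client(cluster)

# Hypothetical dataset location; reading from S3 requires s3fs.
df = dd.read_csv("s3://my-bucket/lending-club/*.csv")
X = df.drop(columns=["loan_status"])
y = df["loan_status"]

# Training is distributed across the Dask workers.
model = DaskLGBMClassifier(n_estimators=500, learning_rate=0.05)
model.fit(X, y)

# The fitted estimator exposes a plain LightGBM Booster for saving and inference.
booster = model.booster_
```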