describe llms as a dessert

Generated on 7/30/2024

I don't have information on describing LLMs (Large Language Models) as a dessert. However, I can provide details on the machine learning and AI content presented at WWDC. Here are some relevant sessions and their chapters:

  1. What’s new in Create ML

    • Introduction
    • Overview
    • App enhancements
    • Object tracking
    • Components
  2. Explore machine learning on Apple platforms

    • Introduction
    • Apple Intelligence
    • ML-powered APIs
    • Running models on device
    • Research
  3. Bring your machine learning and AI models to Apple silicon

    • Introduction
    • Model compression
    • Stateful model
    • Transformer optimization
    • Multifunction model
  4. Deploy machine learning and AI models on-device with Core ML

    • Introduction
    • Integration
    • MLTensor
    • Models with state
    • Multifunction models
    • Performance tools
  5. Train your machine learning and AI models on Apple GPUs

    • Introduction
    • Training frameworks on Apple silicon
    • PyTorch improvements
    • ExecuTorch
    • JAX features

If you have any specific questions about these sessions or need more details, feel free to ask!

Explore machine learning on Apple platforms

Get started with an overview of machine learning frameworks on Apple platforms. Whether you’re implementing your first ML model or you’re an ML expert, we’ll offer guidance to help you select the right framework for your app’s needs.

Bring your machine learning and AI models to Apple silicon

Learn how to optimize your machine learning and AI models to leverage the power of Apple silicon. Review model conversion workflows to prepare your models for on-device deployment. Understand model compression techniques that are compatible with Apple silicon, and at what stages in your model deployment workflow you can apply them. We’ll also explore the tradeoffs between storage size, latency, power usage and accuracy.
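
As a rough sketch of the conversion and compression workflow that session covers, the snippet below traces a toy PyTorch model, converts it with coremltools, and applies post-training weight palettization. The model, input shape, deployment target, and 6-bit setting are illustrative assumptions, not details taken from the session.

```python
# Sketch: convert a PyTorch model to Core ML, then compress its weights.
# Assumes coremltools >= 7 and PyTorch are installed; the toy model and
# 6-bit palettization settings are placeholders.
import torch
import coremltools as ct
from coremltools.optimize.coreml import (
    OptimizationConfig,
    OpPalettizerConfig,
    palettize_weights,
)

# A stand-in model; in practice this would be your trained network.
model = torch.nn.Sequential(
    torch.nn.Linear(128, 256),
    torch.nn.ReLU(),
    torch.nn.Linear(256, 10),
).eval()

example_input = torch.rand(1, 128)
traced = torch.jit.trace(model, example_input)

# Convert the traced model to an ML program for on-device deployment.
mlmodel = ct.convert(
    traced,
    inputs=[ct.TensorType(name="input", shape=example_input.shape)],
    minimum_deployment_target=ct.target.iOS17,
)

# Post-training weight palettization trades a little accuracy for a
# smaller model; the bit width is a tunable knob.
config = OptimizationConfig(global_config=OpPalettizerConfig(nbits=6))
compressed = palettize_weights(mlmodel, config=config)
compressed.save("MyModel.mlpackage")
```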

What’s new in Create ML

Explore updates to Create ML, including interactive data source previews and a new template for building object tracking models for visionOS apps. We’ll also cover important framework improvements, including new time-series forecasting and classification APIs.

Deploy machine learning and AI models on-device with Core ML

Learn new ways to optimize speed and memory performance when you convert and run machine learning and AI models through Core ML. We’ll cover new options for model representations, performance insights, execution, and model stitching, which can be used together to create compelling and private on-device experiences.
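
The session itself centers on the Swift-side Core ML APIs, but a converted model can also be sanity-checked from Python with coremltools before wiring it into an app. Below is a minimal sketch assuming a converted package named MyModel.mlpackage with a single input called "input"; the names and shapes are placeholders.

```python
# Sketch: load a converted Core ML package and run a prediction from Python
# with coremltools (prediction requires macOS). Useful for verifying a model
# before integrating it into an app.
import numpy as np
import coremltools as ct

mlmodel = ct.models.MLModel(
    "MyModel.mlpackage",
    compute_units=ct.ComputeUnit.ALL,  # let Core ML pick CPU, GPU, or Neural Engine
)

# Input keys must match the names the model was converted with.
prediction = mlmodel.predict({"input": np.random.rand(1, 128).astype(np.float32)})
print(prediction)
```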

Train your machine learning and AI models on Apple GPUs

Learn how to train your models on Apple silicon with Metal for PyTorch, JAX and TensorFlow. Take advantage of new attention operations and quantization support for improved transformer model performance on your devices.
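
As a loose illustration of what that looks like from PyTorch, here is a minimal training loop on the Metal (MPS) backend. The toy model and random data are placeholders; the attention and quantization improvements mentioned above aren’t shown.

```python
# Sketch: a minimal PyTorch training loop on an Apple GPU via the Metal
# (MPS) backend. The point is selecting the "mps" device and keeping the
# model and tensors on it; the model and data are stand-ins.
import torch

device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")

model = torch.nn.Sequential(
    torch.nn.Linear(128, 64),
    torch.nn.ReLU(),
    torch.nn.Linear(64, 1),
).to(device)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
loss_fn = torch.nn.MSELoss()

# Random stand-in data; a real workload would use a DataLoader.
x = torch.rand(256, 128, device=device)
y = torch.rand(256, 1, device=device)

for step in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()

print(f"final loss: {loss.item():.4f}")
```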