Release Notes
May 19, 2025 - LEIP Optimize 4.2
Updates and Enhancements
- JetPack 6.0 Support: LEIP Optimize users can now target NVIDIA Jetson devices running JetPack 6.0.
May 02, 2025 - LEIP Optimize 4.1
Updates and Enhancements
- Android Support: LEIP Optimize users can now target Android CPU devices. The included Docker image comes preconfigured with the Android cross compiler.
- Pre-quantized Model Ingestion: Users can now ingest and compile pre-quantized models.
- New Tag Search API: Added `forge.search_target_tags()`, enabling users to search for target tags containing the specified query string (case-insensitive); see the usage sketch after this list.
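A minimal usage sketch is shown below. Only the function name comes from these notes; the `forge` import path and the list-of-strings return type are assumptions.

```python
import forge  # assumed import name for the Forge component of LEIP Optimize

# Search for target tags containing the query string. Matching is case-insensitive,
# so "Jetson" and "jetson" are assumed to return the same results. The return value
# is assumed here to be a list of matching tag strings.
jetson_tags = forge.search_target_tags("jetson")
for tag in jetson_tags:
    print(tag)
```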
Bug Fixes
- ONNX Dynamic Quantization: Resolved an issue where dynamic quantization incorrectly required calibration.
January 31, 2025 - LEIP Optimize 4.0.1
Bug Fixes
- ONNX Export: Fixed an issue with TensorRT ONNX export.
- ONNX Model Precision: Ensured correct model precision settings for ONNX-exported models.
October 23, 2024 - LEIP Optimize 4.0
Updates and Enhancements
- Compiler Framework is now LEIP Optimize.
- Introduces Forge: a tool for editing, manipulating, and introspecting model compute graphs in an Intermediate Representation. Forge can ingest models from a wide variety of sources and compile and optimize them for a diverse set of target hardware.
- ONNX Integration: Users can now optimize ONNX models with the option of maintaining the ONNX graph representation.
- Model Support: LEIP Optimize now supports a much broader set of models, including 90% of the top 50 computer vision models from Hugging Face.
- Pip Install: LEIP Optimize is now available for installation via `pip`.