This interactive converter helps you visualize how floating-point and integer numbers are represented at the binary level.
This converter follows the IEEE 754
floating-point standard and supports standard formats (FP64, FP32, FP16) as well as specialized formats used in
machine learning and AI (BF16, TF32, FP8 E4M3, FP8 E5M2). Integer formats commonly used in ML quantization are also
supported (INT32, INT16, INT8, INT4, and their unsigned variants). You can also define custom formats by specifying
the number of sign, exponent, and mantissa bits. The tool is mobile-friendly and works on screens of all sizes.
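To make the sign/exponent/mantissa split concrete, here is a minimal Python sketch (standard library only, not the converter's own code) that decodes a value's FP32 encoding into its three fields:

```python
import struct

def fp32_fields(x: float):
    """Split a float's FP32 encoding into sign, exponent, and mantissa bits."""
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    sign = bits >> 31             # 1 sign bit
    exponent = (bits >> 23) & 0xFF  # 8 exponent bits, biased by 127
    mantissa = bits & 0x7FFFFF      # 23 mantissa (fraction) bits
    return sign, exponent, mantissa

print(fp32_fields(-1.5))  # (1, 127, 4194304): -1.5 = -1.1b x 2^0
```

The same unpacking idea extends to any format once you know its bit widths, which is exactly what the custom-format option parameterizes.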
How to Use
Choose Your Input Format
Select from standard formats (FP32, FP16, BF16, etc.) or customize the number of sign, exponent, and mantissa bits.
Enter a Value
Type any decimal number, or use the value presets to explore special cases like infinity, NaN, or the smallest/largest representable numbers. You can also edit the binary or hexadecimal representation directly.
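The special values in the presets have fixed IEEE 754 encodings: an all-ones exponent with a zero mantissa is infinity, and an all-ones exponent with a nonzero mantissa is NaN. A short Python snippet (standard library only) lets you inspect these patterns yourself:

```python
import struct
import math

def fp32_bits(x: float) -> str:
    """Return the 32-bit FP32 encoding of x as a binary string."""
    return format(struct.unpack(">I", struct.pack(">f", x))[0], "032b")

print(fp32_bits(math.inf))  # 0 11111111 000...0  (all-ones exponent, zero mantissa)
print(fp32_bits(math.nan))  # all-ones exponent, nonzero mantissa
```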
Select Output Format
Choose a different format to see how your number would be represented. The tool automatically shows precision loss when converting between formats.
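You can reproduce this kind of precision loss outside the tool. As one example, Python's struct module supports half precision (format character "e"), so round-tripping a value through FP16 shows what its 10 mantissa bits can and cannot hold:

```python
import struct

def to_fp16(x: float) -> float:
    """Round-trip a value through its FP16 (half precision) encoding."""
    return struct.unpack("<e", struct.pack("<e", x))[0]

x = 0.1
print(to_fp16(x))      # 0.0999755859375 -- nearest value FP16 can represent
print(to_fp16(x) - x)  # the rounding error introduced by the conversion
```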
Explore Components
View the breakdown of sign bit, exponent, and mantissa. Interactive binary checkboxes let you flip individual bits to see how they affect the final value.
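What the interactive checkboxes do can be sketched in code: flipping a bit is an XOR on the underlying encoding. A small Python example for FP32 (again standard library only):

```python
import struct

def flip_bit(x: float, i: int) -> float:
    """Flip bit i (0 = least significant) of x's FP32 encoding."""
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    return struct.unpack(">f", struct.pack(">I", bits ^ (1 << i)))[0]

print(flip_bit(1.0, 31))  # flips the sign bit: -1.0
print(flip_bit(1.0, 23))  # flips the lowest exponent bit: 0.5
```

Note how flipping a single exponent bit changes the value by a factor of two, while flipping a low mantissa bit barely moves it at all.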
Common Use Cases
ML Engineers: Understand how BF16, FP16, and integer quantization (INT8, INT4) affect model precision
Students: Learn how the IEEE 754 standard and two's complement integers work by visualizing binary representations
Developers: Debug floating-point precision issues by examining exact bit patterns
Hardware Engineers: Design and validate custom floating-point and integer formats for specialized applications
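As an illustration of the integer quantization mentioned above, here is a sketch of symmetric per-tensor INT8 quantization, one common scheme among several; the scale choice and clamping range here are illustrative assumptions, not what any particular framework (or this tool) necessarily uses:

```python
def quantize_int8(values, scale):
    """Symmetric INT8 quantization: round x/scale, clamp to [-128, 127]."""
    return [max(-128, min(127, round(v / scale))) for v in values]

def dequantize(q, scale):
    """Map INT8 codes back to approximate real values."""
    return [x * scale for x in q]

w = [0.5, -1.2, 3.9]
scale = max(abs(v) for v in w) / 127  # one scale for the whole tensor
q = quantize_int8(w, scale)
print(q)                    # [16, -39, 127]
print(dequantize(q, scale)) # approximations of the original values
```

Comparing the dequantized values with the originals shows the rounding error that INT8 (and, more severely, INT4) introduces into model weights.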