float - single precision (32-bit) floating point type
"float" is a single precision (32-bit) floating point type. Single precision simply means 32 bits. Floating point is a way of representing decimal numbers in software; in the IEEE 754 format, a float consists of a sign part, an exponent part, and a mantissa part. Think of the mantissa as determining the number of significant digits.
# Single precision (32-bit) floating point type float
The maximum finite value that a single precision (32-bit) floating point type can represent is defined by FLT_MAX, and the smallest positive normalized value is defined by FLT_MIN. Both are declared in the <float.h> header.
# float sample code
This is sample code using float. The "f" suffix is specified to make it clear that the floating-point literal has type float.
```c
#include <stdio.h>

int main(void) {
    float num = 5.4f;
    printf("%f\n", num);
    return 0;
}
```
The "%f" format specifier is used with the printf function; a float argument is automatically promoted to double when passed to printf.
Output:

```
5.400000
```
# 16-bit wide integers can be represented exactly by float
float is a floating point type, but because integers can also be represented as floating point values, it can hold integer values as well.

Keep in mind that any 16-bit wide integer (int16_t, uint16_t) can be represented exactly by a float, because a float has a 24-bit significand. On the other hand, 32-bit wide integers (int32_t, uint32_t) do not always fit in 24 bits, so they cannot all be represented exactly by a float; use a double when exact 32-bit integer values are required.
# Relationship between the float type, deep learning, and the GPU
Deep learning uses the GPU to speed up its calculations. GPU processing is typically massively parallel computation on float-width numbers. The GPU was originally a processor for screen display; each compute unit operates at float width, which keeps the units small enough to pack densely and run in parallel. This turned out to be a good match for the numerical calculations that deep learning requires. GPU/CUDA bindings are also available from Perl.