Macros - #define

This section explains how to define and use macros. A macro is a piece of text that the preprocessor expands into the source code, via the "#define" directive, before the source code is compiled.

#define MACRO_NAME string

Macro names can use the same characters that are allowed in C identifiers. The string can be any text and may contain spaces.

Macros have several uses, which are introduced below.

Macro constants

The first use of macros is to define constants. Here we will call these macro constants.

By C convention, macro constant names are written in uppercase and may include underscores.

#define MYAPP_PI 3.14
#define MYAPP_PARALLEL_PROCESS 10

Here is a sample program that uses a macro constant. "MYAPP_PI" is expanded to "3.14" before the program is compiled.

#include <stdio.h>

#define MYAPP_PI 3.14

int main (void) {
  double num = 3 * MYAPP_PI;
  
  printf("%f\n", num);
  
  return 0;
}

This is the output result.

9.420000

Use enum for integer constants

For integer constants, it is recommended to use an enum instead of a macro. An enum is processed by the compiler itself and has no textual-expansion side effects.

enum {
  MYAPP_PARALLEL_PROCESS = 10
};
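
As a minimal sketch, mirroring the earlier macro example, the enum constant can be used like this:

#include <stdio.h>

enum {
  MYAPP_PARALLEL_PROCESS = 10
};

int main (void) {
  /* The constant is a real integer value handled by the compiler,
     not a textual substitution by the preprocessor. */
  printf("%d\n", MYAPP_PARALLEL_PROCESS);
  
  return 0;
}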

Avoid macros as much as possible

When writing a program, we recommend avoiding macros as much as possible.

Macro expansion tends to cause unintended side effects.

For example, consider the following side effect.

#define FOO 2 + 3

What is the result of the following expression?

1 * FOO * 3

It will be expanded as follows.

1 * 2 + 3 * 3

2 + 9 gives 11. Didn't you expect FOO to evaluate to 5?

If you parenthesize the definition as follows, it works correctly.

#define FOO (2 + 3)
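
Here is a minimal sketch comparing the two definitions; FOO_SAFE is a name introduced here only for the comparison.

#include <stdio.h>

#define FOO 2 + 3          /* unparenthesized */
#define FOO_SAFE (2 + 3)   /* parenthesized */

int main (void) {
  printf("%d\n", 1 * FOO * 3);      /* expands to 1 * 2 + 3 * 3 and prints 11 */
  printf("%d\n", 1 * FOO_SAFE * 3); /* expands to 1 * (2 + 3) * 3 and prints 15 */
  
  return 0;
}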

In addition, a macro can expand a parameter name in a function prototype declaration, which causes unpredictable side effects.
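
For example, a macro whose name happens to match a parameter name breaks the prototype. This is a hypothetical example ("copy_buffer" is made up for illustration) and it intentionally does not compile.

#define size 10

/* The preprocessor rewrites this prototype to
   "void copy_buffer(char *dst, const char *src, int 10);",
   which is a compile error. */
void copy_buffer(char *dst, const char *src, int size);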

I won't list every possible side effect, but keep in mind that macros tend to introduce error-prone side effects into your program.

Side effects become more difficult to track down as the program grows. And programs tend to grow in size in response to feature requests.

"Today's a little painful, but it'll be okay" will lead to the future "It's terrible, it's no good anymore".

When do you use macros?

If macros should be avoided as much as possible, then when should you use them?

One case is the definition of floating-point constants.

Other than that?

For example, suppose you want to check whether a numerical-computation library is available.

And if that library is available, you want to use its features, for example because it can speed up performance.

In such a case, it is essential to combine "#ifdef" with a macro, as in the sketch below.
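
Here is a minimal sketch of this pattern. MYAPP_HAVE_FASTLIB, fastlib.h, and fastlib_square are hypothetical names; a real build system would define the macro when the optional library is found.

#include <stdio.h>

#ifdef MYAPP_HAVE_FASTLIB
#include <fastlib.h>              /* hypothetical header for the optional library */
#endif

double myapp_square(double x) {
#ifdef MYAPP_HAVE_FASTLIB
  return fastlib_square(x);       /* hypothetical accelerated routine */
#else
  return x * x;                   /* portable fallback */
#endif
}

int main (void) {
  printf("%f\n", myapp_square(3.0));
  
  return 0;
}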

On the other hand, suppose you're writing a portable application using only the limited subset of the C language that works in any environment.

In such cases, you can build your application with minimal macro definitions, such as floating-point constant definitions.

If macros are heavily used in such applications, it's likely that something is wrong.

On the other hand, again, some applications necessarily contain a large number of macros. These are often applications written to absorb environmental differences. For example, to present users with the same socket API, the differences between Windows and Linux/UNIX socket communication have to be absorbed somewhere.
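
Here is a minimal sketch of how macros might absorb such a difference, using the standard Windows (winsock2.h) and POSIX socket headers; the myapp_* names are introduced here only for illustration.

#ifdef _WIN32
#include <winsock2.h>
typedef SOCKET myapp_socket_t;             /* Windows socket handle */
#define MYAPP_CLOSE_SOCKET(s) closesocket(s)
#else
#include <sys/socket.h>
#include <unistd.h>
typedef int myapp_socket_t;                /* POSIX socket descriptor */
#define MYAPP_CLOSE_SOCKET(s) close(s)
#endif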

Macros are also an option when you need to maximize performance and want guaranteed textual expansion rather than relying on the compiler's inline expansion.
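
As a minimal sketch, a function-like macro such as the hypothetical MYAPP_SQUARE below is expanded textually at every use site, with no function-call overhead. Note that its argument is evaluated twice, which is itself a classic macro side effect.

#include <stdio.h>

#define MYAPP_SQUARE(x) ((x) * (x))

int main (void) {
  printf("%d\n", MYAPP_SQUARE(4));  /* expands to ((4) * (4)) and prints 16 */
  
  return 0;
}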

So I can't say it unconditionally, but I recommend that you avoid macros when you don't need them, and think carefully about alternatives when you do.

Related Information