Many C programmers need proper generic-programming mechanisms in C (perhaps something like Zig's comptime), but macros are the worst possible approach, and they don't want to switch to a different language like C++. As a result, they struggle with these issues. This is what I think the standardization committee should focus on, but instead they introduced _Generic.
The biggest issue is the ABI: C is the lingua franca of language interoperability and its ABI can't really be changed, so whatever approach is taken needs to be fully compatible with the existing ABI. `_Generic` is certainly flawed, but it doesn't cause any breaking ABI changes.
That's also a major reason why you'd use C rather than C++. The C++ ABI is terrible for language interoperability. It's common for C++ libraries to wrap their API in C so that it can be used from other languages' FFIs.
Aside from that, another reason we prefer C to C++ is that we don't want vtables. I think there's room for a `C+` language, by which I mean C+templates and not C+classes - perhaps with an ABI which is a subset of the C++ ABI but superset of the C ABI.
> I think there's room for a `C+` language, by which I mean C+templates and not C+classes - perhaps with an ABI which is a subset of the C++ ABI but superset of the C ABI.
Indeed, I have spoken to a lot of my colleagues about just that. If overloading is not allowed, perhaps there is still some hope for a backwards-compatible ABI?
I don't think we can get away with just using the C ABI - or even if we did, we would need a standardized name-mangling scheme, and then any language which consumes the ABI would need to be aware of this name-mangling scheme, so it would effectively be a new ABI.
We might be able to make this ABI compatible with C if no templates are used, which wouldn't cause breaking changes - but for other compilers to be able to use templates they would need to opt-in to the new scheme. For that we'd probably want to augment libffi to include completely new functions for dealing with templates. Eg, we'd have an ffi_template_type, and an ffi_prep_template for which we supply its type arguments - then an ffi_prep_templated_cif for calls which use templates, and so forth. It would basically be a new API - but probably still more practical than trying to support the C++ ABI.
This is true, and I agree: it's the sacred cow of C. However, the problem with generic programming and metaprogramming isn't going away, and many people continue to struggle with it. Introducing something like compile-time reflection might be a solution...
The most insulting thing about _Generic is the name. Really? _Generic? For a type-based switch with horrific syntax? What were they thinking...
That said, generic programming in C isn't that bad, just very annoying.
To me the best approach is to write the code for a concrete type (like Vec_int), make sure everything is working, and then do the following:
A macro Vec(T) sets up the struct. It can then be wrapped in a typedef like typedef Vec(int) Vec_i;
For each function, like vec_append(...), copy the body into a macro VEC_APPEND(...).
Then for each relevant type T: copy paste all the function declarations, then do a manual find/replace to give them some suffix and fill in the body with a call to the macro (to avoid any issues with expressions being executed multiple times in a macro body).
Is it annoying? Definitely. Is it unmanageable? Not really. Some people don't even bother with this last bit and just use the macros to inline the code everywhere.
Some macros can delegate to void*-based helpers to minimize the bloating.
EDIT: I almost dread to suggest this but CMake's configure_file command works great to implement generic files...
There are less annoying ways to implement this in C. There are at least two different common approaches which avoid having macro code for the generic functions:
The first is to put this into an include file
#define type_argument int
#include <vector.h>
Then inside vector.h the code looks like regular C code, except where you insert the argument.
foo_ ## type_argument ( ... )
The other is to write generic code using void pointers or container_of as regular functions, and only have one-line macros as type-safe wrappers around it. The optimizer will be able to specialize it, and it avoids compile-time explosion of code during monomorphization.
I do not think that templates are less annoying in practice. My experience with templates is rather poor.
An idea I had was to implement a FUSE filesystem for includes, so instead of the separate `#define type_argument` (and `#undef type_argument` that would need to follow the #include), we could stick the type argument in the included filename.
The written `vector.h(type_argument)` file could just be a regular C header or an m4 file which has `type_argument` in its template. When requesting `vector.h(int32_t)` the FUSE filesystem would effectively give the output of calling `gcc -E` or `m4` on the template file as the content of the file being requested.
Eg, if `vector.h(type_argument)` were an m4 file with `type_argument` as its template parameter, then `m4 -D type_argument=int32_t 'vector.h(type_argument)'` would give the instantiated header as its output.
But the idea is to make it transparent so that existing tools just see the pre-processed file and don't need to call `m4` manually. We would need to mount each include directory that uses this approach using said filesystem. This shouldn't require changing a project's structure as we could use the existing `include/` or `src/` directory as input when mounting, and just pick some new directory name such as `cfuse/include` or `cfuse/src`, and mount a new directory `cfuse` in the project's root directory. The change we'd need to make is in any Makefiles or other parts of the build, where instead of `gcc -Iinclude` we'd have `gcc -Icfuse/include`. Any non-templated headers in `include/` would just appear as live copies in cfuse/include/, so in theory this could work without causing anything to break.
Those techniques being less annoying is highly debatable ;). Working with void* is annoying, header includes look quite ugly with the ## concatenation everywhere or even a wrapper macro. It also gets much worse when you need to customize the suffix (because type_argument is char* or whatever).
Sometimes the best option is an external script to instantiate a template file.
It may be debatable, but I would say C++'s template syntax is not nicer. I do not think working with void pointers is annoying, but I also prefer the container_of approach. The ## certainly has the limitation that you need to name things first, but I do not think this is much of a downside.
Hey, I understand you and know this stuff well, having worked with it for many years as a C dev. To be honest, this isn't how things should generally be done. Macros were invented for very simple problems. Yes, we can abuse them as much as possible (for example, in C++, we discovered SFINAE, which is an ugly, unreadable technique that wasn't part of the programming language designer's intent but rather like a joke that people started abusing), but is it worth it?
I'm currently at a crossroads: C++ or Zig. One is very popular with a large community, amazing projects, but has lots of ugly design decisions and myriad rules you must know (this is a big pain, it seems like even Stroustrup can't handle all of them). The other is very close to what I want from C, but it's not stable and not popular.
Its only real issue is that people will constantly tell you how bad it is and how their language of choice is so much better. But if you look at how things work out in practice, you can usually do things very nicely in C.
My choice in this situation is indeed C, but every once in a while I hit a problem that makes me yearn for better metaprogramming.
Perfect hashing that you’d ideally use two different approaches for depending on whether the platform has a cheap popcount (hi AArch32), but to avoid complicating the build you give up and emulate popcount instead. Hundreds of thousands of lines of asynchronous I/O code written in a manual continuation-passing style, with random, occasionally problematic blocking synchronization sprinkled all over because the programmer simply could not be bothered anymore to untangle this nested loop, and with a dynamic allocation for each async frame because that’s the path of least resistance. The intense awkwardness of the state-machine / regular-expression code generators, well-developed as they are. Hoping the compiler will merge the `int` and `long` code paths when their machine representations are identical, but not seeing it happen because functions must have unique addresses. Resorting to .init_array—and slowing down startup—because the linker is too rigid to compute this one known-constant value. And yes, polymorphic datastructures.
I don’t really see anybody do noticeably better than C; I think only Zig and Odin (perhaps also Hare and Virgil?) are even competing in the same category. But I can’t help feeling that things could be much better. Then I look at the graveyard of attempted extensions both special-purpose (CPC[1]) and general (Xoc[2]) and despair.
> Many vector types include a capacity field, so that resizing on every push can be avoided. I do not include one, because simplicity is more important to me and realloc often does this already internally. In most scenarios, the performance is already good enough.
I think this is the wrong decision (for a generic array library).
Without a capacity field, each push operation potentially triggers a realloc, causing O(n) copying and possible memory fragmentation - especially problematic for large vectors or performance-critical code.
No, I claim that the performance of realloc is good enough for most use cases because it also does not move the memory when there is already enough space left.
I then mention that for other use cases, you can maintain a capacity field only in the part of the code where you need this.
Whether this is the right design for everybody, I do not know, but so far it is what I prefer for myself.
In all my current code where I tried it makes no noticeable difference, and I am not a fan of premature optimization. But then, I could always switch to the alternative API.
Popular allocators will indeed grow your allocation in-place without moving when possible. This is essentially the same as if you'd tracked it yourself in your vector and grown it once in a while, though it will work with bytes instead of number of items. See for instance the size classes in jemalloc at https://jemalloc.net/jemalloc.3.html. If you ask for 1 byte, you actually have 8 bytes, so realloc within the same size class will be cheap compared to actually moving.
Exactly! Why would I want to add my own memory-management logic on top of the memory-management logic that already exists?
One valid reason might be that I can't rely on realloc not being poor, but then I would rather use my own special allocation function. Other valid reasons would be to have very precise control or certain guarantees, but then I would prefer a different interface. In any case, I do not think that this logic belongs in my vector. But it is also possible that I change my mind on this...
I think it's a very interesting design choice. I haven't read the code, so maybe you already thought of this, but one idea that comes to mind is that instead of reallocing new_size, you realloc f(new_size) where f is some function that rounds up to discrete steps. This should ensure good asymptotics as realloc can then realize that the requested allocation size is identical to the current one and nothing needs to be done.
However, one possible issue is if someone pushes and pops repeatedly just at the boundary where f increases in value. To address that you would have to use more advanced techniques, and, I think, "cheat" by inspecting internal structures of the allocator.
Edit: malloc_usable_size could be used for this purpose I think.
I made a pretty convenient C vector library a while back that lets you use the [] operator directly on the vector pointers: https://github.com/Mashpoe/c-vector
I think the overwhelmingly better approach for C is codegen here. Better ergonomics, tab completion, less error prone, etc. As long as your codegen is solid!
It is also not clear how you get tab completion with code generation. But you could also get tab completion here; somebody just has to add this to the tab completion logic.
It's amazing how many people try to write generic containers for C, when there is already a perfect solution for that, called C++. It's impossible to write generic type-safe code in C, and this version resorts to using GCC extensions to the language (note the ({…}) expressions).
For those afraid of C++: you don't have to use all of it at once, and compilers have been great for the last few decades. You can easily port C code to C++ (often you don't have to do anything at all). Just try it out and reassess the objections you have.
My problem with C++, and maybe this is just me, is RAII.
Now, Resource Acquisition Is Initialization is correct, but the corollary is not generally true, which is to say, my variable going out of scope does not generally mean I want to de-acquire that resource.
So, sooner or later, everything gets wrapped in a reference-counting smart pointer. And reference counting always seemed to me to be a primitive or last-resort memory-management strategy.
Your problem is not with RAII, but with reference counting, which you correctly identified should be the last resort, not the default; at least for the applications typically written in C++.
> my variable going out of scope does not generally mean I want to de-acquire that resource.
But it does! When an object goes out of scope, nobody can/shall use it anymore, so of course it should release its (remaining) resources. If you want to hold on the object, you need to revisit its lifetime and ownership, but that's independent from RAII.
Except that I find C++ far from perfect. In fact, I switched from C++ to C (a while ago) to avoid its issues and I have been much happier. I also find my vec(int) much nicer.
In fact, we are at the moment ripping out some template code in a C code base which has some C++ for CUDA in it, and this one file with C++ templates almost doubles the compilation time of the complete project (with ~700 source files). IMHO it is grotesque how bad it is.
The post is more of a quick-n-dirty (and rather trivial) proof of concept, as the code includes only sporadic checks for allocation errors and then adds a hand-wavy disclaimer to improve it as needed.
E.g. in production code this
    if (!vec_ptr) // memory out
        abort();
    for (int i = 0; i < 10; i++)
        vec_push(int, &vec_ptr, i);
should really be
    if (!vec_ptr) // memory out
        abort();
    for (int i = 0; i < 10; i++)
        if (!vec_push(int, &vec_ptr, i))
            abort();

but it doesn't really roll off the tongue.
If this is in library code, then I tend to disagree. As a user of a library, I would rather be able to handle errors the way I want; I do not want the library to decide this for me, so just return an error value, like "VEC_ERR_NOMEM", or whatever.
You don't need an explicit allocator: you can create an empty object and vec_push() does the magic of (re)allocating the memory when needed.
Instead, what is missing is an automatic deallocator, one that's automatically called when the variable goes out of scope. For example:
    {
        vec(int) v = {}; /* no extra room allocated */
        vec_push(v, 1);  /* space allocated */
        ... use v ...
    } /* here the space is deallocated, then v is released */
This example doesn't use the same definition of vec as TFA, but something more similar to 'span' by the same author.
BTW, here is some generic code in C using a variadic type. I think this is quite nice. https://godbolt.org/z/jxz6Y6f9x
Running a program for metaprogramming is always a possibility, and I would agree that it is sometimes the best solution.
[1] https://github.com/kerneis/cpc
[2] https://pdos.csail.mit.edu/archive/xoc/
> I think this is the wrong decision (for a generic array library).
Is Martin claiming that realloc is "often" maintaining an O(1) growable array for us?
That's what the analogous types in C++ or Rust, or indeed Java, Go, C# etc. provide.
> However one possible issue is if someone pushes and pops repeatedly just at the boundary where f increases in value.
I try to do this here (this code is not tested and may not be up-to-date): https://github.com/uecker/noplate/blob/main/src/vec.h#L30
The issue with the boundary is what I meant with hysteresis in the article.
A neat project that was posted here a while back uses malloc_usable_size: https://x.com/kotsoft/status/1792295331582869891
His code is here: https://github.com/drh/cii