Hell Oh Entropy!

Life, Code and everything in between

Hello FOSSASIA: Revisiting the event *and* the first program we write in C

Posted: Mar 19, 2017, 10:15

I was at FOSSASIA this weekend to deliver a workshop on the very basics of programming. It capped off a pretty rough couple of weeks for me, with travel to Budapest (for Linaro Connect) followed immediately by travel to Singapore. It seems I don't handle travelling east across time zones very well, and the effects were visible in me napping at odd hours and generally looking groggy through the weekend in Singapore. It was all worth it, however, because despite a number of glitches, I had some real positives to take back from the conference.

The conference

FOSSASIA had been on my list of conferences to visit because Kushal Das kept telling me that I'd meet interesting people there. I had proposed a talk a couple of years ago (since I can't justify the travel just to attend), but dropped out because I could not find sponsors for my talk and FOSSASIA was not interested in sponsoring me either. Last year I met Hong at SHD Belgaum and she invited me to speak at FOSSASIA. I gladly accepted, since Nisha was going to volunteer anyway. However, as things turned out in the end, my talk got accepted and I found sponsorship for travel and stay (courtesy of Linaro), but Nisha could not attend.

I came (I’m still in SG, waiting for my flight) half-heartedly since Nisha did not accompany me, but the travel seemed worth it in the end. I met some very interesting people and was able to deliver a workshop that I was satisfied with.

Speaking of the workshop…

I was scheduled to speak first thing in the morning on the last day (Sunday), and I was pretty sure I would be the only person standing there, since nobody in their right mind wakes up that early on a Sunday for a workshop. By then I also knew the venue and its deficiencies - the "Scientist for a Day" part of the Science Center was a disaster on the first day, since it was completely open and noisy, with lunch being served right next to the room. I was wary of that, but the Sunday morning slot spared my workshop from such glitches.

The workshop content itself was based on an impromptu 'workshop' I did at FUDCon Pune 2015, but a little more organized. Here's a blow-by-blow account of the session for those who missed it, and a reference for those who attended and would like to come back to it in the future.

Hell Oh World

It all starts with this program. Hello World is what we all write when we set out to learn a new language. After Hello World, however, we move on to learning the syntax of the language and then to solving more complex user problems, ignoring the wonderful things that happened underneath Hello World to make it all work. This session is an attempt to take a brief look into those depths. Since I am a bit of a cynic, my Hello World program is slightly different:

#include <stdio.h>

int
main (void)
{
  printf ("Hell Oh World!\n");
  return 0;
}

We compile this program:

$ gcc -o helloworld helloworld.c

We can see that the program prints the result just fine:

$ ./helloworld 
Hell Oh World!

But there is so much more that went into making that program work. Let's take a look at the binary using a process called disassembly, which translates the binary program into a human-readable format - well, at least readable to humans who know assembly language programming.

$ objdump -d helloworld

We wrote only one function: main, so we should see only that. Instead, however, we see many other functions present in the binary. In fact, you were lied to when they told you back in college that main() is the entry point of the program! The entry point is a function called _start, which calls a function in the GNU C Library called __libc_start_main, which in turn calls the main function. When you invoke the compiler to build the helloworld program, you're actually running a number of commands in sequence. In general, you do the following steps: preprocess the source, compile the preprocessed source into assembly, assemble the assembly into object code, and link the objects and libraries into the final program.

Let us look at these steps one by one.
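Condensed into explicit commands, that sequence looks roughly like this (a sketch; the gcc driver actually manages these stages itself using temporary files and internal tools):

$ gcc -E -o helloworld.i helloworld.c
$ gcc -S -o helloworld.s helloworld.i
$ gcc -c -o helloworld.o helloworld.s
$ gcc -o helloworld helloworld.o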

Preprocessing

$ gcc -E -o helloworld.i helloworld.c

Run this command instead of the earlier gcc command to produce a preprocessed file. You'll see that the resultant file has hundreds of lines of code, and among those hundreds of lines is the one line we need: the prototype for printf, so that the compiler can identify the printf call:

extern int printf (const char *__restrict __format, ...);

It is possible to just use this extern declaration and avoid including the entire header file, but it is not good practice. The overhead of maintaining something like this is unnecessary, especially when the compiler can eliminate the unused bits anyway. We are better off just including a couple of headers and getting all the declarations.
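For illustration, here is what that bad practice would look like (a sketch; it compiles and runs, but breaks silently if the real declaration ever changes):

/* Declaring printf by hand instead of including stdio.h.
   Don't do this - use the header instead.  */
extern int printf (const char *__restrict __format, ...);

int
main (void)
{
  printf ("Hell Oh World!\n");
  return 0;
}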

Compiling the preprocessed source

Contrary to popular belief, the compiler does not produce a binary .o file directly - it only generates assembly code. It then calls the assembler from the binutils project to convert that assembly into object code.

$ gcc -S -o helloworld.s helloworld.i

The assembly code is now just this:

    .file   "helloworld.i"
    .section    .rodata
.LC0:
    .string "Hell Oh World!"
    .text
    .globl  main
    .type   main, @function
main:
.LFB0:
    .cfi_startproc
    pushq   %rbp
    .cfi_def_cfa_offset 16
    .cfi_offset 6, -16
    movq    %rsp, %rbp
    .cfi_def_cfa_register 6
    movl    $.LC0, %edi
    call    puts
    movl    $0, %eax
    popq    %rbp
    .cfi_def_cfa 7, 8
    ret
    .cfi_endproc
.LFE0:
    .size   main, .-main
    .ident  "GCC: (GNU) 6.3.1 20161221 (Red Hat 6.3.1-1)"
    .section    .note.GNU-stack,"",@progbits

which is just the main function and nothing else. The interesting thing there, though, is that the printf function call has been replaced with puts, because the input to printf is just a string without any format specifiers and puts is faster than printf in such cases. This is an optimization by gcc to make the code run faster. In fact, the compiler runs close to 200 optimization passes in an attempt to improve the quality of the generated assembly code. What it does not do, however, is add any of those extra functions we saw in the binary.
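As a quick experiment (my example, not part of the original session), adding a format conversion prevents this substitution; the generated assembly then calls printf directly:

#include <stdio.h>

int
main (void)
{
  int x = 42;

  /* The %d conversion forces a real printf call; gcc can no
     longer substitute puts here.  */
  printf ("Hell Oh World number %d!\n", x);
  return 0;
}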

So does the assembler add the rest of the gunk?

Assembling the assembly

$ gcc -c -o helloworld.o helloworld.s

This is how we assemble the generated assembly source into an object file. The resulting object file can again be disassembled using objdump, and we see this:

helloworld.o:     file format elf64-x86-64


Disassembly of section .text:

0000000000000000 <main>:
   0:   55                      push   %rbp
   1:   48 89 e5                mov    %rsp,%rbp
   4:   bf 00 00 00 00          mov    $0x0,%edi
   9:   e8 00 00 00 00          callq  e <main+0xe>
   e:   b8 00 00 00 00          mov    $0x0,%eax
  13:   5d                      pop    %rbp
  14:   c3                      retq

which is no more than what we saw in the compiler output, just in binary format. So it surely is the linker adding all of the gunk.

Putting it all together

Now that we know it is the linker that adds all of the additional stuff to helloworld, let's look at how gcc invokes the linker. To do this, we need to add -v to the gcc command. You'll get a lot of output, but the relevant bit is this:

$ gcc -v -o helloworld helloworld.c
...

...
/usr/libexec/gcc/x86_64-redhat-linux/6.3.1/collect2 -plugin /usr/libexec/gcc/x86_64-redhat-linux/6.3.1/liblto_plugin.so -plugin-opt=/usr/libexec/gcc/x86_64-redhat-linux/6.3.1/lto-wrapper -plugin-opt=-fresolution=/tmp/ccEdWzG5.res -plugin-opt=-pass-through=-lgcc -plugin-opt=-pass-through=-lgcc_s -plugin-opt=-pass-through=-lc -plugin-opt=-pass-through=-lgcc -plugin-opt=-pass-through=-lgcc_s --build-id --no-add-needed --eh-frame-hdr --hash-style=gnu -m elf_x86_64 -dynamic-linker /lib64/ld-linux-x86-64.so.2 -o helloworld /usr/lib/gcc/x86_64-redhat-linux/6.3.1/../../../../lib64/crt1.o /usr/lib/gcc/x86_64-redhat-linux/6.3.1/../../../../lib64/crti.o /usr/lib/gcc/x86_64-redhat-linux/6.3.1/crtbegin.o -L/usr/lib/gcc/x86_64-redhat-linux/6.3.1 -L/usr/lib/gcc/x86_64-redhat-linux/6.3.1/../../../../lib64 -L/lib/../lib64 -L/usr/lib/../lib64 -L/usr/lib/gcc/x86_64-redhat-linux/6.3.1/../../.. /tmp/cc3m0We9.o -lgcc --as-needed -lgcc_s --no-as-needed -lc -lgcc --as-needed -lgcc_s --no-as-needed /usr/lib/gcc/x86_64-redhat-linux/6.3.1/crtend.o /usr/lib/gcc/x86_64-redhat-linux/6.3.1/../../../../lib64/crtn.o
COLLECT_GCC_OPTIONS='-v' '-o' 'helloworld' '-mtune=generic' '-march=x86-64'

This is a long command, but the main points of interest are the object files (*.o) that get linked in: the linker concatenates them and then resolves any remaining references to functions (only puts in this case) among those objects and the libraries (libc.so via -lc, libgcc.so via -lgcc, and so on). To find out which of the object files has the definition of a specific function, say _start, disassemble each of them. You'll find that crt1.o has the definition.
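For example (the exact path to crt1.o varies by distribution; this one matches the Fedora paths in the link command above):

$ objdump -d /usr/lib64/crt1.o | grep -A 5 '<_start>:'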

Static linking

Another interesting thing to note in the disassembly of the final binary is that the call is to puts@plt, which is not exactly puts. It is in reality a construct called a trampoline, which helps the code jump to the actual puts function at runtime. We need this because puts is actually present in libc.so.6, which the binary simply claims to need by encoding that dependency within itself. To see this, dump the binary's headers using the -x flag:

$ objdump -x helloworld

helloworld:     file format elf64-x86-64
helloworld
architecture: i386:x86-64, flags 0x00000112:
EXEC_P, HAS_SYMS, D_PAGED
start address 0x0000000000400430
...
Dynamic Section:
  NEEDED               libc.so.6
...

This is dynamic linking. When a program is executed, what actually gets invoked first is the dynamic linker (ld.so), which then opens all dependent libraries, maps them into memory, and calls the _start function in the program. During mapping, it also fills in a table of data called the Global Offset Table with the addresses of all of the external references (puts in our case) to help the trampoline jump to the correct location.
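Schematically, the puts@plt trampoline looks like this (not verbatim objdump output - the operands and labels here are placeholders):

<puts@plt>:
    jmpq   *GOT_entry_for_puts(%rip)   # jump to wherever the GOT points
    pushq  $relocation_index           # first call: the GOT points back here...
    jmpq   <PLT0>                      # ...so we fall through to ld.so's resolver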

If you want to be independent of the dynamic linker, then you can link the program statically:

$ gcc -static -o helloworld helloworld.c

This will however result in a bloated program, and it has a number of other disadvantages too, like having to rebuild the program for every update of its dependent libraries, and sub-optimal memory usage, since the kernel can no longer share pages of common code among processes.
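To see the difference, compare the two builds (sizes vary by system; the dynamically linked binary from this program was about 8.5K on the machine used here):

$ gcc -o helloworld-dynamic helloworld.c
$ gcc -static -o helloworld-static helloworld.c
$ ls -l helloworld-dynamic helloworld-static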

BONUS: Writing the smallest program

The basics were done with about 10 minutes to spare, so I showed how one could write the smallest program ever. In principle, the smallest program in C is:

int
main (void)
{
  return 42;
}

As is evident though, this still pulls in everything from the C and gcc support libraries, so it is clearly hard to go truly minimal in C. Let's try it in assembly instead. We already know that _start is the real entry point of the program, so that is the function we need to implement. To exit the program, we need to ask the kernel to exit by invoking the exit_group syscall, which has syscall number 231 on x86_64. Here is what the function looks like:

.globl _start
_start:
    mov $231, %rax    # syscall number for exit_group on x86_64
    mov $42, %rdi     # exit status, matching the return 42 above
    syscall

We can build this with gcc to get a very small binary but to do this, we need to specify that we don’t want to use the standard libraries:

$ gcc -o min -nostdlib min.s

The resultant file is 864 bytes, as opposed to the 8.5K binary from the C program. We can reduce this further by invoking the assembler and linker directly:

$ as -o min.o min.s
$ ld -o min min.o

This results in an even smaller binary, at 664 bytes! This is because gcc puts some extra meta information in the binary to identify its builds.
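We can verify that the program still does its one job; the shell's $? variable holds the exit status of the last command:

$ ./min
$ echo $?
42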

Conclusion

At this point we ran out of time and had to cut things short. It was a fun interaction, because there were even a couple of people with MacBooks and we spotted a couple of differences in the way the linker ran due to differences in the libc, despite them having the same gcc installed. I wasn't able to dig into the specifics of these differences and I hope they weren't a problem for the attendees using Macs. All in all it was a satisfying session, because the audience seemed happy to learn about all of this. It looked like many of them left with more questions (and wonderment, as I had when I first learned these things) than they came in with, and I hope they follow up and eventually participate in Open Source projects to satisfy their curiosity and learn further.

Comments

GNU Tools Cauldron 2016, ARMv8 multi-arch edition

Posted: Sep 28, 2016, 11:49

Worst planned trip ever.

That is what my England trip for the GNU Tools Cauldron was, but that only seemed to add to the pleasure of meeting friends again. I flew in to Heathrow and started on a long train journey to Halifax, with two train changes from Reading. I forgot my phone on the train, but the friendly station manager at Halifax helped track it down and got it back to me. That was the first of the many times I forgot stuff in a variety of places during this trip. Like when I discovered that I had forgotten to carry a jacket or an umbrella. Or shorts. Or full length pants for that matter. Like when I purchased an umbrella from Sainsbury's but forgot to carry it out. I guess you get the drift of it.

All that mess aside, the conference itself was wonderful as usual. My main point of interest at the Cauldron this time was to try and make progress on discussions around multi-arch support for ARMv8. I have never talked about this on my blog in the past, so a brief introduction is in order.

What is multi-arch?

Processors evolve over time and introduce features that the C library can exploit to do work faster, like using the vector (SIMD) unit to do memory copies and manipulation faster. However, this is at odds with the C library's goal of being able to run on all hardware, including machines that may not have a vector unit or may not have that specific type of vector unit (e.g. they have SSE4 but not AVX512 on x86). To solve this problem, we exploit the concepts of the PLT and dynamic linking.

I thought we were talking about multiarch, what’s a PLT now?

When a program calls a function in a library that it links to dynamically (i.e. only the reference to the library and the function are present in the binary, not the function implementation), it makes the call via an indirect reference (aka a trampoline) within the binary, because it cannot know where the function's entry point in the other library resides in memory. The trampoline uses a table (called the Procedure Linkage Table, PLT for short) to jump to the final location, which is the entry point of the function.

In the beginning, the entry point is set to a function in the dynamic linker (let's call it the resolver function), which looks for the function name in the libraries that the program links to and then updates the table with the result. The dynamic linker's resolver function can do more than just look for the exact function name in those libraries, and that is where the concept of Indirect Functions, or IFUNCs, comes into the picture.
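You can watch this resolution happen using glibc's LD_DEBUG facility (here run on the helloworld binary from the post above; output abridged, and paths and symbol versions will differ across systems):

$ LD_DEBUG=bindings ./helloworld
...
binding file ./helloworld [0] to /lib64/libc.so.6 [0]: normal symbol `puts' [GLIBC_2.2.5]
...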

Further down the rabbit hole - what’s an IFUNC?

When the resolver function finds the function symbol in a library, it looks at the type of the function before simply patching the PLT with its address. If it finds that the function is an IFUNC type (let's call that function the IFUNC resolver), it knows that executing that function will give the actual address of the function it should patch into the PLT. This is a very powerful idea, because it allows us to have multiple implementations of the same function built into the library for different features, and to have the IFUNC resolver examine its execution environment and return the address of the most appropriate one. This is fundamentally how multi-arch is implemented in glibc, where we have multiple implementations of functions like memcpy, each utilizing different features, like AVX, AVX2, SSE4 and so on. The IFUNC resolver for memcpy then queries the CPU to find the features it supports and returns the address of the implementation best suited to the processor.
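Here is a minimal sketch of the mechanism (my illustration, not glibc's actual code; it assumes gcc on a glibc-based system, and the "fast" variant is just a stand-in):

#include <stddef.h>
#include <string.h>

typedef void *(*memcpy_fn) (void *, const void *, size_t);

/* Portable baseline implementation.  */
static void *
my_memcpy_generic (void *dst, const void *src, size_t n)
{
  return memcpy (dst, src, n);
}

/* Stand-in for a variant that would use special CPU features.  */
static void *
my_memcpy_fast (void *dst, const void *src, size_t n)
{
  return memcpy (dst, src, n);
}

/* The IFUNC resolver: executed by the dynamic linker when it
   resolves my_memcpy; its return value is patched into the PLT.  */
static memcpy_fn
resolve_my_memcpy (void)
{
  /* A real resolver would query CPU features here (e.g. via the
     CPUID instruction on x86); this sketch just always says yes.  */
  int have_fast_cpu = 1;
  return have_fast_cpu ? my_memcpy_fast : my_memcpy_generic;
}

/* my_memcpy is an indirect function, dispatched via the resolver.  */
void *my_memcpy (void *dst, const void *src, size_t n)
  __attribute__ ((ifunc ("resolve_my_memcpy")));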

… and we’re back! Multi-arch for ARMv8

ARMv8 has been making good progress in terms of adoption and it is clear that ARM servers are going to form a significant portion of the datacenters of the future. That said, the major vendors of such servers with architecture licenses are trying to differentiate by innovating at the microarchitecture level. This means that a sequence of instructions may not have the same execution cost on all processors. This gives vendors an opportunity to write optimal code sequences for key function implementations (string functions, for example) for their processors and have them included in the C library. They can then use the IFUNC mechanism to identify their processors and dispatch the routine best suited to that processor implementation.

This is all great, except that they can't identify their processors reliably with the current state of the kernel and glibc. The way to identify a vendor processor is to read the MIDR_EL1 and REVIDR_EL1 registers using the MRS instruction. As the register names suggest, they are readable only in exception level 1, i.e. by the kernel, which makes it impossible for glibc to read them directly, unlike on Intel processors where the CPUID instruction is executable in userspace and is sufficient to identify the processor and its features.

… and this is only the beginning of the problem. ARM processors have a very interesting (and hence painful) feature called big.LITTLE, which allows for different processor configurations on a single die. Even if we had a way to read the two registers, we could end up reading MIDR_EL1 from one CPU and REVIDR_EL1 from another, so we need a way to ensure that both values are read from the same core.

This led to the initial proposal for kernel support to expose the information in a sysfs directory structure in addition to a trap into the kernel for the MRS instruction. This meant that for any IFUNC implementation to find out the vendor IDs of the cores on the system, it would have to traverse a whole directory structure, which is not the most optimal thing to do in an IFUNC, even if it happens only once in the lifetime of a process. As a result, we wanted to look for a better alternative.

VDSO FTW!

The number of system calls in a directory traversal would be staggering for, say, a 128 core processor, and things will undoubtedly get worse as we scale up. Another way for the kernel to share this (mostly static) information with userspace is via the VDSO: an opaque structure in userspace pages in the vdso, along with helper functions to traverse that structure. This however (like the filesystem traversal) exposed a deeper problem: the extent of what we can do in an IFUNC.

An IFUNC runs very early in a dynamically linked program and even earlier in a statically linked program. As a result, there is very little that it can do because most of the complex features are not even initialized at that point. What’s more, the things you can do in a dynamic program are different from the things you can do in a static program (pretty much nothing right now in the latter), so that’s an inconsistency that is hard to reconcile. This makes the IFUNC resolvers very limited in their power and applicability, at least in their current state.

What were we talking about again?

The brief introduction turned out to be not so brief after all, but I hope it was clear. All of this fine analysis was done by Szabolcs Nagy from ARM when we first talked about multi-arch, and the conclusion was that we needed to fix and enhance IFUNC support first if we had any hope of doing micro-architecture detection for ARM. However, there is another way for now…

Tunables!

A (not so) famous person (me) once said that glibc tunables are the answer to all problems including world hunger and of course, the ARMv8 multi-arch problem. This was a long term idea I had shared at the Linaro Connect in Bangkok earlier this year, but it looks like it might become a reality sooner. What’s more, it seems like Intel is looking for something like that as well, so I am not alone in making this potentially insane suggestion.

The basic idea here would be to have environment variable(s) to do or override IFUNC selection via tunables until the multi-arch situation is resolved. Tunables initialization is much more lightweight: it only really relies on what the kernel provides on the stack and in the auxiliary vector, and on what the CPU provides directly. It seems easier to delay IFUNC resolution at least until tunables are initialized, and to then look harder at how much further resolution can be delayed, so that resolvers can use other things like the VDSO and/or files.
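To give a flavour of what such an override could look like on the command line (a hypothetical sketch - this tunable name did not exist at the time of writing):

$ GLIBC_TUNABLES=glibc.cpu.name=thunderx ./myprog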

So here is yet another idea that has culminated in a "just finish tunables already!" suggestion. The glibc community has agreed on the 2.25 release as the deadline to get this support in, so hopefully we will see some real code for it in that time frame.

Comments

Understanding malloc behaviour using Systemtap userspace probes

Posted: Oct 02, 2014, 22:03

A blog post I wrote on Understanding malloc behaviour using Systemtap userspace probes on the Red Hat Developer Blog has now been published. I got a query about a follow-up post with example usage, which I hope to be able to work on soon-ish.

Comments

Buggy HLE, microcode updates and SIGILLs

Posted: Sep 26, 2014, 09:32

Update: Disabling lock elision in glibc doesn’t seem to be sufficient. Either way, the Fedora kernel folks will have an update in place to update the microcode early by default so that both the kernel and the first instantiation of pthreads will see HLE disabled. So read the story as something interesting that we did but didn’t quite work. It was fun though…

Amit and I ran into an interesting problem today with his new Haswell processor based system. A fully updated Fedora 21 alpha would fail during boot and fall into the maintenance shell. The systemd journal showed that systemd-udevd was crashing with a SIGILL, which seemed strange. The core dump revealed the problem:

(gdb) x/i $rip
=> 0x7f68b0b978ba <pthread_rwlock_rdlock+186>:  xbeginq 0x7f68b0b978c0 <pthread_rwlock_rdlock+192>

The xbegin instruction is part of Intel's TSX transactional memory extensions (the feature set that includes HLE), so the first thing that came to mind was the recent errata that Intel had pushed out, effectively announcing that the feature was buggy and that they were going to disable it via microcode. We looked at /proc/cpuinfo expecting to find the hle and rtm flags missing, but were even more confused to find that they were present.

After much tinkering about, Amit made a vague reference to microcode_ctl being able to change the CPU microcode on the fly. It took a while to hit us, but we finally realized that we had found the culprit: microcode_ctl had been updated with the latest Intel microcode. We initially thought it ought to be a one-time problem, since the microcode would be flashed into the CPU and everything would work thereafter, but then we found out that the microcode update is not persistent and needs to be loaded again on every boot.

So the root cause was that the microcode update happened late enough in boot that systemd was already up and had read the hle bit, thus enabling lock elision support in systemd. Also, since the kernel had already read in the CPU capabilities, it did not have the updated capabilities either, due to which we continued seeing hle and rtm set in cpuinfo.

As a result, thanks to the microcode update, all Haswell based F21 alpha systems are essentially unbootable. Carlos is now fixing this by disabling lock elision completely in the glibc build. Work is in progress for rawhide, F21 and F20 as I write this, so the impact of this will hopefully be minimal. If you do run into this problem, all you have to do is downgrade the microcode_ctl package and pin it so that it doesn't get updated until the glibc update becomes available.

Comments

NOTABUG in glibc

Posted: Sep 18, 2014, 02:16

The glibc malloc implementation has a number of heap consistency checks in place to ensure that memory corruption bugs in programs are caught as early as possible and the program aborted to prevent misuse of the bug. Memory corruption through buffer overruns (or underruns) is often an exploit vector waiting to be 'used', which is why these consistency checks and aborts are necessary.

If the heap of a program has been found to be corrupted, the program is terminated with an error that usually looks something like this:

*** glibc detected *** ./foo: double free or corruption (!prev): 0x0000000001362010 ***
======= Backtrace: =========
/lib64/libc.so.6(+0x78a96)[0x7f3df63aea96]
/lib64/libc.so.6(cfree+0x6c)[0x7f3df63b2d7c]
./foo[0x400e7c]
/lib64/libc.so.6(__libc_start_main+0xed)[0x7f3df635730d]
./foo[0x4008f9]
======= Memory map: ========

and when one looks at the core dump, the top of the call stack is all inside glibc:

Program terminated with signal 6, Aborted.
#0  0x00007fd0273b6925 in raise (sig=6) at ../nptl/sysdeps/unix/sysv/linux/raise.c:64
64    return INLINE_SYSCALL (tgkill, 3, pid, selftid, sig);
(gdb) bt
#0  0x00007fd0273b6925 in raise (sig=6) at ../nptl/sysdeps/unix/sysv/linux/raise.c:64
#1  0x00007fd0273b8105 in abort () at abort.c:92
#2  0x00007fd0273f4837 in __libc_message (do_abort=2, fmt=0x7fd0274dcaa0 "n not possible due to RF-kill") at ../sysdeps/unix/sysv/linux/libc_fatal.c:198
#3  0x00007fd0273fa166 in malloc_printerr (action=3, str=0x7fd0274daa5e "/proc/self/maps", ptr=<optimized out>) at malloc.c:6332
#4  0x00007fd0273fdf9a in _int_malloc (av=0x7fd027713e80, bytes=<optimized out>) at malloc.c:4673

The common mistake one may make here is to assume that it is a glibc bug because the crash is ‘caused’ by glibc. That is the equivalent of killing the whistleblower. The crash is indeed caused by glibc, but the bug is not in glibc. glibc has only caught the bug after it has happened and halted execution of the program.

And if you think glibc is overstepping its bounds by halting the program, you can tell it not to abort by exporting the MALLOC_CHECK_ environment variable set to either 0 (completely silent) or 1 (print the message on stderr and continue). Of course, you would have to be smoking something very exotic to do that instead of finding and fixing the bug.
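For example, to get the diagnostic on stderr without the abort:

$ MALLOC_CHECK_=1 ./foo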

Comments