Software Teardowns: Console.WriteLine (Part 2: Unix)

Last time we went through tearing down the Windows implementation of Console.WriteLine, and we made it all the way to the closed source of the Win32 API. This time, we’re going through the Unix version of Console.WriteLine, and along the way we’ll be able to go deeper, since the Unix stack is open source.

We left off at the platform implementation of the ConsolePal class, and if you remember, ConsolePal.Unix.cs is a drop-in compile-time file replacement.

The UML diagram at this point resembles the following (omitting properties and methods not relevant to this post):

In an effort to move a little faster, I’ll refer to the previous post where code is the same, and explain it here when it isn’t.

In order to write to the console, we must (again) OpenStandardOutput() which looks as follows:

public static Stream OpenStandardOutput()
{
    return new UnixConsoleStream(SafeFileHandleHelper.Open(() => Interop.Sys.Dup(Interop.Sys.FileDescriptors.STDOUT_FILENO)), FileAccess.Write);
}

This differs significantly from its Windows OpenStandardOutput() counterpart, starting with a call to SafeFileHandleHelper.Open(Func&lt;SafeFileHandle&gt; fdFunc).

namespace Microsoft.Win32.SafeHandles
{
    internal static class SafeFileHandleHelper
    {
        /* Snip.... */

        /// <summary>Opens a SafeFileHandle for a file descriptor created by a provided delegate.</summary>
        /// <param name="fdFunc">
        /// The function that creates the file descriptor. Returns the file descriptor on success, or an invalid
        /// file descriptor on error with Marshal.GetLastWin32Error() set to the error code.
        /// </param>
        /// <returns>The created SafeFileHandle.</returns>
        internal static SafeFileHandle Open(Func<SafeFileHandle> fdFunc)
        {
            SafeFileHandle handle = Interop.CheckIo(fdFunc());

            Debug.Assert(!handle.IsInvalid, "File descriptor is invalid");
            return handle;
        }
    }
}

Several things are of note about the above code: even though it’s only called from the Unix code, it lives in the Microsoft.Win32.SafeHandles namespace, and the name of the file is SafeFileHandleHelper.Unix.cs.

The next interesting bit is that this takes in a Func delegate comprised of a call to Interop.Sys.Dup(Interop.Sys.FileDescriptors.STDOUT_FILENO), which leads us down a new path. Previously we had seen Interop refer to native Windows calls; but since .NET Core is cross platform, it also has to interop with *nix environments. The Dup method is new to me, so I’ll spend a moment trying to track down why it’s called that. A quick GitHub search shows me that it’s a wrapper for a SystemNative_Dup call, which I don’t quite yet understand:

internal static partial class Interop
{
    internal static partial class Sys
    {
        [DllImport(Libraries.SystemNative, EntryPoint = "SystemNative_Dup", SetLastError = true)]
        internal static extern SafeFileHandle Dup(SafeFileHandle oldfd);
    }
}

If my understanding holds true, I should be able to look around and find a SystemNative_Dup either in the framework’s CLR, or in a native standard library. (Time to Google again).

I found a pal_io.h (header file), and a pal_io.c that contains the SystemNative_Dup function call. From our last blog post on this subject, we found out that PAL stands for Platform Abstraction Layer; so this native code file handles IO at the PAL level.
This file is located at ./src/Native/Unix/System.Native/pal_io.c.

intptr_t SystemNative_Dup(intptr_t oldfd)
{
    int result;
#if HAVE_F_DUPFD_CLOEXEC
    while ((result = fcntl(ToFileDescriptor(oldfd), F_DUPFD_CLOEXEC, 0)) < 0 && errno == EINTR);
#else
    while ((result = fcntl(ToFileDescriptor(oldfd), F_DUPFD, 0)) < 0 && errno == EINTR);
    // do CLOEXEC here too
    fcntl(result, F_SETFD, FD_CLOEXEC);
#endif
    return result;
}

The first bit I want to tear down here is the HAVE_F_DUPFD_CLOEXEC preprocessor block. Since this is a preprocessor definition, the code that gets compiled changes based on whether that definition is enabled (generally through a compiler directive in Visual Studio, or through a command-line flag for GCC or MSBuild). A quick search shows that HAVE_F_DUPFD_CLOEXEC is defined in one place, but used in two places:

In src/Native/Unix/Common/ (comments added by me):

#pragma once

#cmakedefine PAL_UNIX_NAME @PAL_UNIX_NAME@ //line 3
#cmakedefine01 HAVE_F_DUPFD_CLOEXEC //line 9

The interesting part about this is that #cmakedefine01 is a pre-defined (hehe) define in CMake; so it makes sense that they use CMake as part of their build toolchain.

As far as what HAVE_F_DUPFD_CLOEXEC may mean, there are references to F_DUPFD_CLOEXEC in some Linux codebases; particularly in /include/uapi/linux/fcntl.h, which has the following definition:

/* Create a file descriptor with FD_CLOEXEC set. */

And a google search turns up the following documentation for fcntl.h which is short for “File Control”:

Return a new file descriptor which shall be allocated as described in File Descriptor Allocation, except that it shall be the lowest numbered available file descriptor greater than or equal to the third argument, arg, taken as an integer of type int. The new file descriptor shall refer to the same open file description as the original file descriptor, and shall share any locks. The FD_CLOEXEC flag associated with the new file descriptor shall be cleared to keep the file open across calls to one of the exec functions.
Like F_DUPFD, but the FD_CLOEXEC flag associated with the new file descriptor shall be set.

In other words, using this returns a new file descriptor referring to the same open file, but with the CLOEXEC (close-on-exec) flag set on the new file descriptor, meaning it is automatically closed when the process calls one of the exec functions. (And dup is short for duplicate.) With that answered, we’re back to this line of code:

 while ((result = fcntl(ToFileDescriptor(oldfd), F_DUPFD_CLOEXEC, 0)) < 0 && errno == EINTR);

This while loop converts oldfd to a file descriptor, and retries the fcntl call if it is interrupted by a signal (errno == EINTR):

/*
 * Converts an intptr_t to a file descriptor.
 * intptr_t is the type used to marshal file descriptors so we can use SafeHandles effectively.
 */
inline static int ToFileDescriptorUnchecked(intptr_t fd)
{
    return (int)fd;
}

/*
 * Converts an intptr_t to a file descriptor.
 * intptr_t is the type used to marshal file descriptors so we can use SafeHandles effectively.
 */
inline static int ToFileDescriptor(intptr_t fd)
{
    assert(0 <= fd && fd < sysconf(_SC_OPEN_MAX));

    return ToFileDescriptorUnchecked(fd);
}

and when that check has completed, calls the standard library’s __libc_fcntl(), which in turn makes the fcntl syscall, handled by the kernel’s do_fcntl, which has the following function signature:

static long do_fcntl(int fd, unsigned int cmd, unsigned long arg,
		struct file *filp)

The first argument is the file descriptor to pass (by convention a file descriptor in POSIX land is a non-negative integer). STDOUT, which is what we care about, has a file descriptor value of 1; the constant STDOUT_FILENO is defined as 1.

It casts oldfd to a system-appropriate file descriptor, passes the F_DUPFD_CLOEXEC command, and starts searching for a free descriptor at 0.

So to recap where we’re at; we’ve crossed from the .NET Core Framework into the native calls necessary to open STDOUT_FILENO and ensure it’s open so we can write to it.

Now that we’ve opened the file descriptor, we can open a stream over that file descriptor; and we’ll do that with UnixConsoleStream, via this line of code:

public static Stream OpenStandardOutput()
{
    return new UnixConsoleStream(SafeFileHandleHelper.Open(() => Interop.Sys.Dup(Interop.Sys.FileDescriptors.STDOUT_FILENO)), FileAccess.Write);
}

The UnixConsoleStream class is an internal class located in the ConsolePal.Unix.cs file. It derives from the abstract base class ConsoleStream, and it does two particularly Unix-y things on instantiation:

internal UnixConsoleStream(SafeFileHandle handle, FileAccess access)
    : base(access)
{
    Debug.Assert(handle != null, "Expected non-null console handle");
    Debug.Assert(!handle.IsInvalid, "Expected valid console handle");
    _handle = handle;

    // Determine the type of the descriptor (e.g. regular file, character file, pipe, etc.)
    Interop.Sys.FileStatus buf;
    _handleType =
        Interop.Sys.FStat(_handle, out buf) == 0 ?
            (buf.Mode & Interop.Sys.FileTypes.S_IFMT) :
            Interop.Sys.FileTypes.S_IFREG; // if something goes wrong, don't fail, just say it's a regular file
}

First, it calls FStat (a wrapper over the Unix fstat syscall) to get the status of the file descriptor. Much like all framework calls to native code, there’s an Interop class made for this purpose. Here’s the one for FStat:

[DllImport(Libraries.SystemNative, EntryPoint = "SystemNative_FStat", SetLastError = true)]
internal static extern int FStat(SafeFileHandle fd, out FileStatus output);

internal static class FileTypes
{
    internal const int S_IFMT = 0xF000;
    internal const int S_IFIFO = 0x1000;
    internal const int S_IFCHR = 0x2000;
    internal const int S_IFDIR = 0x4000;
    internal const int S_IFREG = 0x8000;
    internal const int S_IFLNK = 0xA000;
    internal const int S_IFSOCK = 0xC000;
}

(This is getting too long for me to teardown DllImportAttribute, so I’ll do that in a future post).
Above this call are the constants also referenced in the code snippet above for UnixConsoleStream, particularly S_IFMT and S_IFREG. S_IFMT is the bit mask that isolates the file-type bits of the mode, and S_IFREG means “regular file”. (Honestly I can’t see how S_IFMT is an obvious name for a file-type mask; I would have expected “format”. Could someone who does systems programming chime in on why it’s named S_IFMT?)

Because Unix and its variants are open source software, we get to actually dive into the C code behind these calls. For fstat in glibc, the function looks like this:

#include <sys/stat.h>

/* This definition is only used if inlining fails for this function; see
   the last page of <sys/stat.h>.  The real work is done by the `x'
   function which is passed a version number argument.  We arrange in the
   makefile that when not inlined this function is always statically
   linked; that way a dynamically-linked executable always encodes the
   version number corresponding to the data structures it uses, so the `x'
   functions in the shared library can adapt without needing to recompile
   all callers.  */

#undef fstat
#undef __fstat

int
__fstat (int fd, struct stat *buf)
{
  return __fxstat (_STAT_VER, fd, buf);
}

weak_hidden_alias (__fstat, fstat)

This is where things get really complicated. There are multiple versions of fstat, and depending on the version, different code will be called. Or, as explained by the man page:

Over time, increases in the size of the stat structure have led to three successive versions of stat(): sys_stat() (slot __NR_oldstat), sys_newstat() (slot __NR_stat), and sys_stat64() (slot __NR_stat64) on 32-bit platforms such as i386. The first two versions were already present in Linux 1.0 (albeit with different names); the last was added in Linux 2.4. Similar remarks apply for fstat() and lstat().

Hello technical debt?

Don’t worry, there’s more:

The glibc stat() wrapper function hides these details from applications, invoking the most recent version of the system call provided by the kernel, and repacking the returned information if required for old binaries.

On modern 64-bit systems, life is simpler: there is a single stat() system call and the kernel deals with a stat structure that contains fields of a sufficient size.

The underlying system call employed by the glibc fstatat() wrapper function is actually called fstatat64() or, on some architectures, newfstatat().

And this is where Unix majorly differs from Windows. I’ve ignored it up until now for simplicity, but Unix is not a singular operating system so much as it’s a style of operating system. If you ask someone if they’re running Unix, you’ll also need to ask what variant they’re using: is it a GNU/Linux variant? A BSD variant? Some other Unix? And that doesn’t even tell you everything you need to know if you’re interacting with the OS. You still need to know which C standard library implementation is running: musl, glibc, or some other C library. That any of this works at all is a testament to software developers.

Back to our UnixConsoleStream. Now that we have OpenStandardOutput() executed, we need to write to it:

public static TextWriter Out => EnsureInitialized(ref s_out, () => CreateOutputWriter(OpenStandardOutput()));

The next step is to create the output writer. This part starts with a call to this private method:

private static TextWriter CreateOutputWriter(Stream outputStream)
{
    return TextWriter.Synchronized(outputStream == Stream.Null ?
        StreamWriter.Null :
        new StreamWriter(
            stream: outputStream,
            encoding: OutputEncoding.RemovePreamble(), // This ensures no prefix is written to the stream.
            bufferSize: DefaultConsoleBufferSize,
            leaveOpen: true) { AutoFlush = true });
}

The preamble mentioned is the byte-order mark (BOM): when you write to the console, you don’t want to send a BOM first. If you’re writing to a file, the preamble matters, since it tells later readers the encoding and byte order; on an interactive console it would just show up as stray characters, so it’s stripped.

The next part is that AutoFlush is set to true. This is important because when you write to a file, the file is not immediately written to. A buffer fills up, and once that buffer is full it’s “Flushed” to the file. This can cause problems if you’re looking for immediate feedback on a console window, so turning on AutoFlush alleviates that.

The TextWriter.Synchronized static method is located here:

public static TextWriter Synchronized(TextWriter writer)
{
    if (writer == null)
        throw new ArgumentNullException(nameof(writer));

    return writer is SyncTextWriter ? writer : new SyncTextWriter(writer);
}

The SyncTextWriter, as the name suggests, ensures that writing is synchronized across threads, and the only bit that seems strange is a new attribute, [MethodImpl(MethodImplOptions.Synchronized)]. (Not posting the full source due to its length; but it looks a lot like TextWriter, except it has this attribute.) In fact, it’s a child class of TextWriter, and calls the base class’s version of all the methods, while adding the above attribute to each call.

MethodImplOptions.Synchronized is an enum of compiler flags, and as the comments state, it is used when compiling the code to give the method certain properties:

namespace System.Runtime.CompilerServices
{
    // This Enum matches the miImpl flags defined in corhdr.h. It is used to specify
    // certain method properties.
    public enum MethodImplOptions
    {
        Unmanaged = 0x0004,
        NoInlining = 0x0008,
        ForwardRef = 0x0010,
        Synchronized = 0x0020,
        NoOptimization = 0x0040,
        PreserveSig = 0x0080,
        AggressiveInlining = 0x0100,
        AggressiveOptimization = 0x0200,
        InternalCall = 0x1000
    }
}

Unfortunately if I want to go deeper I need to dig into Roslyn. So I’ll do that, but only for a second. I’m out of my depth, so I search for Synchronized; and find this comment, which points me (almost by accident) in the right direction… except now I’m so far out of my depth I’m not sure which way is up. I’m looking for what IL would be generated for a Synchronized method; but can’t find it on my own searching.

Back to the TextWriter (well, SyncTextWriter; but since it calls the base-class methods with special options, we’ll look at TextWriter and just pretend it’s synchronized).

// Writes a string followed by a line terminator to the text stream.
public virtual void WriteLine(string value)
{
    if (value != null)
    {
        Write(value);
    }
    Write(CoreNewLine);
}
The interesting case is that it doesn’t write a null string to the console (I wonder why not?). The first call is to Write(string value):

public virtual void Write(string value)
{
    if (value != null)
    {
        Write(value.ToCharArray());
    }
}

Which itself calls Write(char[] buffer)

// Writes a character array to the text stream. This default method calls
// Write(char) for each of the characters in the character array.
// If the character array is null, nothing is written.
public virtual void Write(char[] buffer)
    if (buffer != null)
        Write(buffer, 0, buffer.Length);

Which itself calls the version of Write(char[] buffer, int index, int count):

// Writes a range of a character array to the text stream. This method will
// write count characters of data into this TextWriter from the
// buffer character array starting at position index.
public virtual void Write(char[] buffer, int index, int count)
    if (buffer == null)
        throw new ArgumentNullException(nameof(buffer), SR.ArgumentNull_Buffer);
    if (index < 0)
       throw new ArgumentOutOfRangeException(nameof(index), SR.ArgumentOutOfRange_NeedNonNegNum);
    if (count < 0)
        throw new ArgumentOutOfRangeException(nameof(count), SR.ArgumentOutOfRange_NeedNonNegNum);
    if (buffer.Length - index < count)
        throw new ArgumentException(SR.Argument_InvalidOffLen);
    for (int i = 0; i < count; i++) Write(buffer[index + i]);

Now that we’ve covered the innards of what is happening, let’s step back to where this all began, ConsolePal.Unix.cs.

public override void Write(byte[] buffer, int offset, int count)
{
    ValidateWrite(buffer, offset, count);
    ConsolePal.Write(_handle, buffer, offset, count);
}

This calls the base ConsoleStream’s ValidateWrite method, which does bounds checking on the inputs:

protected void ValidateWrite(byte[] buffer, int offset, int count)
{
    if (buffer == null)
        throw new ArgumentNullException(nameof(buffer));
    if (offset < 0 || count < 0)
        throw new ArgumentOutOfRangeException(offset < 0 ? nameof(offset) : nameof(count), SR.ArgumentOutOfRange_NeedNonNegNum);
    if (buffer.Length - offset < count)
        throw new ArgumentException(SR.Argument_InvalidOffLen);

    if (!_canWrite) throw Error.GetWriteNotSupported();
}

And this calls the Unix Specific ConsolePal.Write method, which should send us back down the Unix rabbithole.

/// <summary>Writes data from the buffer into the file descriptor.</summary>
/// <param name="fd">The file descriptor.</param>
/// <param name="buffer">The buffer from which to write data.</param>
/// <param name="offset">The offset at which the data to write starts in the buffer.</param>
/// <param name="count">The number of bytes to write.</param>
private static unsafe void Write(SafeFileHandle fd, byte[] buffer, int offset, int count)
{
    fixed (byte* bufPtr = buffer)
    {
        Write(fd, bufPtr + offset, count);
    }
}

Much like before, this safe version ensures the compiler pins the memory locations of the data structures being written, and then it calls the internal version with a byte* to the buffer instead of a byte[]:

private static unsafe void Write(SafeFileHandle fd, byte* bufPtr, int count)
{
    while (count > 0)
    {
        int bytesWritten = Interop.Sys.Write(fd, bufPtr, count);
        if (bytesWritten < 0)
        {
            Interop.ErrorInfo errorInfo = Interop.Sys.GetLastErrorInfo();
            if (errorInfo.Error == Interop.Error.EPIPE)
            {
                // Broken pipe... likely due to being redirected to a program
                // that ended, so simply pretend we were successful.
                return;
            }
            else if (errorInfo.Error == Interop.Error.EAGAIN) // aka EWOULDBLOCK
            {
                // May happen if the file handle is configured as non-blocking.
                // In that case, we need to wait to be able to write and then
                // try again. We poll, but don't actually care about the result,
                // only the blocking behavior, and thus ignore any poll errors
                // and loop around to do another write (which may correctly fail
                // if something else has gone wrong).
                Interop.Sys.Poll(fd, Interop.Sys.PollEvents.POLLOUT, Timeout.Infinite, out Interop.Sys.PollEvents triggered);
                continue;
            }
            else
            {
                // Something else... fail.
                throw Interop.GetExceptionForIoErrno(errorInfo);
            }
        }

        count -= bytesWritten;
        bufPtr += bytesWritten;
    }
}

This code calls the native Write, handles broken pipes and non-blocking handles (polling until the descriptor is writable), and loops until the whole buffer has been written.

So finally (!) we’ve gotten to the point where .NET Core Framework code calls the native sys_call for Write; and this is the point where it all happens. All of that setup for this.

Let’s recap where we are:
1. User Calls Console.WriteLine(string val);
2. Console.WriteLine calls OpenStandardOutput()
3. OpenStandardOutput() is compiled with different drop-ins for Windows and Linux; in this case ConsolePal.Unix.cs is compiled and used.
4. ConsolePal.Unix.cs ensures the correct native syscall is made to open a file descriptor for STDOUT_FILENO. The syscall sits in a loop so it is retried if a signal interrupts it.
5. Once the native syscall executes, the stream is opened using the same conditional compilation we saw earlier, with UnixConsoleStream being created.
6. The UnixConsoleStream is created with the correct filehandle and the access being requested (read or write); and the instantiation checks to ensure the file can be accessed in the manner requested and is available (does it exist?). If so, it writes to the buffer of the file descriptor.
7. The TextWriter is created appropriately; and its Write method will be called.
8. It calls the appropriate stream’s Write method.
9. The UnixConsoleStream calls its internal ValidateWrite method, and then its internal unsafe Write method, which calls another version of Write that takes in a pointer to the bytes being passed in.
10. It’s at this point where we call the native Write method and actually write out to the system console.

Now we have to find the syscall in the .NET Core Framework code. Since the calls follow a naming convention of SystemNative_<call>, I’ll look for SystemNative_Write, and sure enough, I find it.

[DllImport(Libraries.SystemNative, EntryPoint = "SystemNative_Write", SetLastError = true)]
internal static extern unsafe int Write(int fd, byte* buffer, int bufferSize);

This calls into a C file, pal_io.c, and in particular its SystemNative_Write function:

int32_t SystemNative_Write(intptr_t fd, const void* buffer, int32_t bufferSize)
{
    assert(buffer != NULL || bufferSize == 0);
    assert(bufferSize >= 0);

    if (bufferSize < 0)
    {
        errno = ERANGE;
        return -1;
    }

    ssize_t count;
    while ((count = write(ToFileDescriptor(fd), buffer, (uint32_t)bufferSize)) < 0 && errno == EINTR);

    assert(count >= -1 && count <= bufferSize);
    return (int32_t)count;
}

This block is interesting in that it does its checking, and then calls the syscall write (note the lowercase) in a while loop. Which write gets called depends on the C library in use; I’m referencing glibc since I know it’s pretty common.
Because I’ve been looking at glibc, I know it prepends its syscall wrappers with two underscores; so I use that to find the write function:

/* Write NBYTES of BUF to FD.  Return the number written, or -1.  */
ssize_t
__libc_write (int fd, const void *buf, size_t nbytes)
{
  if (nbytes == 0)
    return 0;
  if (fd < 0)
    {
      __set_errno (EBADF);
      return -1;
    }
  if (buf == NULL)
    {
      __set_errno (EINVAL);
      return -1;
    }

  __set_errno (ENOSYS);
  return -1;
}
libc_hidden_def (__libc_write)
stub_warning (write)

weak_alias (__libc_write, __write)
libc_hidden_weak (__write)
weak_alias (__libc_write, write)
libc_hidden_weak (write)

And it’s at this point where I’ve gone about as far as I can without knowing more of the magic behind the way glibc is set up. I would have expected to see the buffer being written here; but all I see are preprocessor macros after a series of if statements, none of which look like they do anything with the buffer. (The stub_warning macro suggests this file is the fallback stub glibc uses on platforms that don’t provide the syscall; on a real Linux system, the write that actually runs appears to be a generated syscall wrapper that traps into the kernel.)

I’ve enjoyed this dive into how a string is written to the console. I’d love to hear from you if you have picked up a thread I’m missing here; particularly around the syscall write and how that magic works.
