Sterling has too many projects

Blogging about Raku programming, microcontrollers & electronics, 3D printing, and whatever else...

»

I hope this calendar has been of some use to you all. In any case:

Merry Christmas!

»

Lots of big words in the title. In simpler terms, it means running a program in the background and interacting with it as input and output becomes available. The tool in Raku for doing this work is called Proc::Async. If you’ve ever dealt with the pain of trying to safely communicate with an external process, writing to its input and reading from its output and error streams and hated it, I think you’ll like what Raku has built-in.

First, let’s contrive a problem. Let’s make an external program that takes strings as input, reverses them, and writes the reversed strings on standard output. Meanwhile, it reports on standard error whether or not the given string is a palindrome. We could write this program like so:

for $*IN.lines -> $line {
    my $rline = $line.flip;     # reverse the string
    say $rline;                 # reversed line goes to standard output
    note ($rline eq $line);     # True/False palindrome check goes to standard error
}

And just to make it clear, for this sample input:

spam
slap
tacocat

We will get this output:

maps
False
pals
False
tacocat
True

The True and False lines are on standard error, and the other lines are on standard output. Clear? Good.

Next, to finish our contrived problem, we need to interact with this program and write out a message like tacocat is a palindrome! because it is exciting whenever we see a palindrome. We want to output nothing otherwise. Let’s use Proc::Async to interact with our other program, which we’ve called palindromer for giggles.

react {
    my $palindromes = Supplier.new;
    my $p = Proc::Async.new: './palindromer', :w;
    my @rlines;
    my @palindrome-checks;

    # Echo our own input
    whenever $*IN.Supply.lines -> $line {
        $p.say: $line;

        # Let palindromer know when we run out of input
        LAST { $p.close-stdin }
    }

    # Watch for the reversed lines on palindromer's standard output
    whenever $p.stdout.lines -> $rline {
        if @palindrome-checks.shift -> $is-palindrome {
            # 'False' is a non-empty, truthy string, so compare explicitly
            $palindromes.emit: $rline if $is-palindrome eq 'True';
        }
        else {
            push @rlines, $rline;
        }
    }

    # Watch for the True/False output from palindromer's standard error
    whenever $p.stderr.lines -> $is-palindrome {
        if @rlines.shift -> $rline {
            $palindromes.emit: $rline if $is-palindrome eq 'True';
        }
        else {
            push @palindrome-checks, $is-palindrome;
        }
    }

    # PALINDROMES ARE EXCITING!
    whenever $palindromes.Supply -> $palindrome {
        say "$palindrome is a palindrome!";
    }

    # Quit when palindromer quits
    whenever $p.start { done }

}

Now, if we pipe the same input as before into our new program, we should get this output:

tacocat is a palindrome!

Our code deals with the potential problem of standard output and standard error getting out of sync by using a queue to accumulate extra values from each side. We feed the discovered palindromes into a central Supplier named $palindromes so we can have a single place for printing the palindromes we find.

A couple key points to be aware of when using Proc::Async:

  1. Always put the .start call after your tap on standard output and standard error. Otherwise, there could be problems with missed lines (i.e., lines emitted before you are listening). Raku will warn you if it detects that you have done this, by the way. This method returns a Promise that is kept when the program is finished.
  2. Make sure you use .close-stdin when you are finished with input to make sure the other program knows that you’re done.
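
Put together, those two rules look something like this minimal sketch (reusing the hypothetical ./palindromer program from above):

```raku
my $p = Proc::Async.new: './palindromer', :w;

# Rule 1: tap stdout and stderr *before* calling .start
$p.stdout.lines.tap: -> $line { say "OUT: $line" };
$p.stderr.lines.tap: -> $line { say "ERR: $line" };

# .start returns a Promise kept when the program finishes
my $done = $p.start;

# Rule 2: signal end-of-input when you are finished writing
await $p.say('tacocat');
$p.close-stdin;

await $done;
```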

Otherwise, there are a few more interesting features, such as .ready amongst others, which you might want to read more about in the Raku reference documentation online.

Cheers.

»

Raku actually provides two different locking classes. A Lock object provides a very standard locking mechanism. When .lock and .unlock are used or .protect is called, you get a section of code that pauses until the lock frees up, runs while holding the lock, and then frees the lock so other code that might be waiting on the lock can run.

However, the Lock class works in such a way that it blocks the current thread. As I’ve pointed out earlier in this advent calendar, the purpose of threads is to do stuff, so blocking them from running prevents them from fulfilling their purpose. Luckily, there is a solution.

In cases where you want locking, but don’t want to burn your threads waiting for the lock to free up, consider using Lock::Async instead of Lock. It works very similarly to Lock, but the .lock method does not block. Instead, it returns a Promise which will be kept when the lock is free. Code that awaits that Promise will pause in a way that allows Raku to reuse the current thread for another task:

class SafeQueue {
    has @!queue;
    has Lock::Async $!lock .= new;

    method enqueue($value) {
        await $!lock.lock;
        push @!queue, $value;
        $!lock.unlock;
    }

    method dequeue(--> Any) {
        $!lock.protect: { shift @!queue }
    }
}

The code above demonstrates both the use of .lock and .unlock as well as .protect. You should always prefer .protect as the code in enqueue above might leave the lock forever held if an exception gets thrown after the lock is acquired. From the point of view of your program, a .protect will behave similarly between Lock and Lock::Async, but internally performs an await on the .lock method. This means that the thread the code is running on will be freed to be used by another task that is waiting to be scheduled.
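
For completeness, here is a sketch of the same class with enqueue rewritten to use .protect, which guarantees the lock is released even if the block throws:

```raku
class SafeQueue {
    has @!queue;
    has Lock::Async $!lock .= new;

    # .protect acquires the lock, runs the block, and releases
    # the lock even if push throws an exception
    method enqueue($value) {
        $!lock.protect: { push @!queue, $value }
    }

    method dequeue(--> Any) {
        $!lock.protect: { shift @!queue }
    }
}
```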

Cheers.

»

What’s more asynchronous than socket communication? When two programs need to talk to each other, often from different computers on different networks in different parts of the world, you can connect using a socket. Whether an HTTP server or some custom protocol, you can implement both sides of that communication using IO::Socket::Async.

Let’s consider a simple calculator service. It listens for connections over TCP. When a connection is established, it takes lines of input over the connection and parses each line as a simple mathematical calculation like 2 + 2 or 6 * 7.

We can write the server like this.

react {
  whenever IO::Socket::Async.listen('127.0.0.1', 3456) -> $conn {
    whenever $conn.Supply.lines -> $line {
      if $line ~~ m:s/$<a> = [ \d+ ] $<op> = [ '+' | '-' | '*' | '/' ] $<b> = [ \d+ ]/ {
        my $a = +$<a>;
        my $b = +$<b>;

        my $r = do given "$<op>".trim {
          when '+' { $a + $b }
          when '-' { $a - $b }
          when '*' { $a * $b }
          when '/' { $a div $b }
          default { "Unknown Error"; }
        }

        $conn.print("$r\n");
      }
      else {
        $conn.print("Syntax Error\n");
      }
    }
  }
}

Now, the nested whenever block might look a little odd, but this is just fine. You can add more whenever blocks within a react block this way any time you need to.

The outer whenever listens for new connection objects. Its sole job here is to register each connection as another whenever block with the server. Be aware that using this strategy means that you are handling all connections asynchronously as if from a single thread. A more scalable solution might be to use a start block for each arriving connection (which might look something like this):

start react whenever $conn.Supply.lines -> $line { ... }

Moving on, the inner whenever then watches for lines of input from each connection as it arrives. It will receive a message whenever a line of input has been sent by the client. This code parses that line, performs the expression (or discovers an error), and returns the result.

Simple.

We call listen to establish a listening TCP socket on the named address and port number. This returns a Supply which will emit connected IO::Socket::Async objects. You can use the Supply method on the connection object to get text (or bytes via the :bin option) whenever they are sent from the associated client. You use the write and print methods to send bytes and text back.

The client can be written with IO::Socket::Async as well. Here is a client which uses our calculator service to compute the Fibonacci sequence:

my ($a, $b) = (0, 1);
say "$a";
say "$b";
await IO::Socket::Async.connect('127.0.0.1', 3456).then: -> $connection {
  given $connection.result -> $conn {
    $conn.print("$a + $b\n");

    react {
      whenever $conn.Supply.lines -> $line {
        $a = $b;
        $b = +$line;
        say "$b";
        $conn.print("$a + $b\n");
      }
    }
  }
}

When making a client connection, we use the connect method with the IP address or host name of the server to connect to. That method returns a Promise which is kept once the connection has been made. The result of that promise is a connected IO::Socket::Async object which can be used in precisely the same way as on the server, with Supply returning text or bytes and write and print being used to send text or bytes.

Cheers.

»

Iteration is slow. If you have N things to process in a loop, your loop will take N iterations to process. Slow. Sometimes that’s the only way, though, to solve a problem.

For example, let’s consider the case where we have a JSON log and we want a command to read each line, parse the JSON for that log, and summarize it showing the time stamp and message:

use JSON::Fast;
my $log-file = 'myapp.log'.IO;
for $log-file.lines -> $line {
  my %data = from-json($line);
  say "%data<timestamp> %data<message>";
}

If you have multiple cores on your system (and who doesn’t in 2019?), you can actually speed this up a little bit with a small change:

use JSON::Fast;
my $log-file = 'myapp.log'.IO;
race for $log-file.lines -> $line {
  my %data = from-json($line);
  say "%data<timestamp> %data<message>";
}

The race prefix added to any loop will result in the items being iterated as quickly as possible on the available cores. On my machine, a short 10,000-line log with only these two fields sees about a 25% time savings. However, this comes with a consequence: the original order of the lines is no longer preserved. In some cases this might not matter, but in others it does.

Now, there is another prefix we could use that preserves order, called hyper. However, in this particular case, it won’t work. Why? Because hyper only guarantees the results will be output in order, but here we are outputting the results as the code is run. This is something to be very careful of whenever working with these keywords.

However, this is easy to fix. You just need to eliminate the side-effects and make your for loop functional:

use JSON::Fast;
my $log-file = 'myapp.log'.IO;
my $output-lines = hyper for $log-file.lines -> $line {
  my %data = from-json($line);
  "%data<timestamp> %data<message>";
}
.say for @$output-lines;

Now, we get most of the speedup from parallelizing the parsing of JSON lines, but we can output in the same order as the original file. This works because the output of a for loop with a hyper or race prefix works just like do: the result is a sequence that we can iterate. In this case, it’s a HyperSeq which makes sure Raku handles the multi-threading bits correctly.

Cheers.

»

The goal of today’s article is to consider when you want to run your tasks simultaneously and how to do that. I am not going to give any rules for this because what works one time may not work the next. Instead, I will focus on sharing some guidelines that I have learned from personal experience.

Remember Your Promises

Whenever you use concurrency, you want to hold on to the related Promise objects. They are almost always the best way to rejoin your tasks, to cause the main thread to await completion of your concurrent tasks, etc.

A common pattern I see in my code looks like this:

my $gui-task = start { ... }
my $console  = start { ... }
my $jobs     = start { ... }
await Promise.allof($gui-task, $console, $jobs);

Just like that, I have three tasks running in three different threads and the main thread is holding until the three tasks complete.

That await is also where you want to add your CATCH blocks as that’s the point at which exceptions from the other threads will rejoin the calling thread.
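
As a sketch, a CATCH placed in the block doing the await will see exceptions thrown inside the tasks (the error message here is invented):

```raku
my $task = start { die "something went wrong in the task" };

{
    CATCH {
        # An exception thrown inside the task surfaces at the await
        default { note "task failed: {.message}" }
    }
    await $task;
}
```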

The Main Thread is Special

When you write your concurrent program, be aware that the main thread is special. It will not be scheduled to run a task, and your program will continue to run as long as the main thread is doing something or awaiting something. As soon as the main thread exits, your other tasks will immediately be reaped and quit.

Prefer a Single Thread for Input or Output

Avoid sharing file handles or sockets between threads. Only a single thread can read or write to a single handle at a time. The easiest way to make sure you do that safely is to keep that activity in a single thread. On a multi-threaded program where any thread may output to standard output or standard error, I often invoke a pattern like the following:

my Supplier $out .= new;
my Supplier $err .= new;
start {
    react {
        whenever $out { .say }
        whenever $err { .note }
    }
}
start { 
    for ^10_000 { $out.emit: $_ }
}
start {
    for ^10_000 { $err.emit: $_ }
}

If you don’t employ a pattern like that, your program will probably still work, but you may end up with oddly interleaved output.

Raku Data Structures are Not Inherently Safe

Similar to what was said in the previous section for input and output, please note that most Raku data structures are not thread safe. If you want to use a data structure across threads, you must use some strategy for making that access thread safe. Some strategies that will work are:

  • Use a thread that manages access to that data structure, as was done above for standard output and standard error.

  • Use a monitor pattern to secure the data structure.

  • Make use of cas, Lock, Lock::Async, Semaphore, Promise, or one of the other locking mechanisms available to guard access to the object as appropriate.

  • Manage the modifications to the object using a Channel or Supply.

Whatever you do, do not assume access to an object is thread safe unless thread safety is explicitly part of the design of the object.
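
As a sketch of the last strategy, a Channel can serialize all modifications to a structure owned by a single task (the names $updates and @log here are invented for illustration):

```raku
my @log;                        # owned exclusively by the worker below
my $updates = Channel.new;

# One task applies every modification, so @log never needs a lock
my $worker = start {
    react whenever $updates -> $entry {
        push @log, $entry;
    }
}

# Any number of threads may safely send updates
await start { $updates.send("from thread A") },
      start { $updates.send("from thread B") };

$updates.close;
await $worker;
say @log.elems;
```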

Use a Task per GUI Event Loop or Window

If your application has a GUI, you almost certainly want a separate thread for managing input and output with the GUI. Most GUI libraries have a built-in event loop already, and you want to run that loop as a task in a separate thread. You may want a single task for your whole GUI or a separate task per window.

Batch Small Tasks

You do not always want a separate task for every action. Some actions are simply too trivial and their execution too short to justify the overhead. What counts as a reasonably sized task is really up to you and your execution environment. Just be aware that running your tasks in batches is often a better strategy than running them in tiny bits when the processing involved is trivial.

If you use the hyper or race keywords or methods to parallelize work, batching is built-in and automatic. You may want to experiment with the parameters to see if tuning the batch sizes of your task results in speed increases.
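
For example, the hyper and race methods accept :batch and :degree parameters you can tune; the values below are arbitrary starting points, not recommendations:

```raku
# Square numbers in parallel, 4096 items per batch, on up to 4 workers;
# hyper preserves the original order of results
my @squares = (1..100_000).hyper(:batch(4096), :degree(4)).map(* ** 2);
say @squares[^3];  # (1 4 9)
```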

Break Larger Tasks into Smaller Ones

Some concurrent tasks you just want to run continuously as CPU time is available or trigger whenever an event becomes available. However, single-run tasks that run long can sometimes benefit from being broken down into smaller ones. Only a finite number of tasks can run simultaneously, and breaking them down can help make sure the CPU stays busy.

One easy way to break down your tasks is to insert await statements in natural places. As of Raku v6.d, you can effectively turn your tasks into coroutines by pausing with an await until a socket is ready, more data comes in, a signal arrives from a Promise or Channel, etc. Remember that any time Raku encounters an await, it is an opportunity for Raku to schedule work for another task on the thread the current task is using.

Beware of Thread Limitations

There are a limited number of threads available. If you have a program with the potential to run a large number of tasks, take some time to consider how the tasks are broken down. Limiting the number of dependencies between tasks will allow your program to scale efficiently without exhausting resources.

Any time your tasks must pause for input or for whatever reason, making sure to do that with an await will ensure that the maximum number of threads are ready for work.

Avoid Sleep

I consider sleep to be harmful. Instead, prefer await Promise.in(...) as that gives Raku the ability to reuse the current thread for another task. Only use sleep when you deliberately want to lock up a thread during the pause. I make use of sleep in this Advent calendar mostly because it is more familiar. In practice, I generally only use it on the main thread.
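
As a sketch, these two pauses look the same from each task’s point of view, but only the await variant lets the scheduler reuse the thread during the pause:

```raku
# Both tasks pause for 2 seconds, but only the second frees
# its thread for other work while waiting
start { sleep 2;             say "done (thread was blocked)"  };
start { await Promise.in(2); say "done (thread was reusable)" };

sleep 3;  # hold the main thread open long enough to see both
```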

Conclusion

Much of this advice overlaps the advice on breaking up async problems. I hope this provides some useful guidelines when writing concurrent programs.

Cheers.

»

As I have said several times before in this calendar, it is always best to avoid sharing state between running threads. Again, however, here is yet another way to share state, when you need to do it.

A few days ago, we considered monitors as a mechanism for creating a thread-safe object. Let’s consider the following monitor:

class BankBalanceMonitor {
    has UInt $.balance = 1000;
    has Lock $!lock .= new;

    method deposit(UInt:D $amount) {
        $!lock.protect: { $!balance += $amount };
    }

    method withdraw(UInt:D $amount) {
        $!lock.protect: { $!balance -= $amount };
    }
}

The day after that we considered the compare-and-swap operation, a.k.a. cas, and how to use it with any scalar variable in Raku. By using cas, we can actually create thread safe objects without using locks at all.

Thus, we can rewrite the above class as a lock-free data structure like this:

class BankBalanceLockFree {
    has UInt $.balance = 1000;

    method deposit(UInt:D $amount) {
        cas $!balance, -> $current { $current + $amount };
    }

    method withdraw(UInt:D $amount) {
        cas $!balance, -> $current { $current - $amount };
    }
}

That’s it. Same protections, but now we’ve made use of the scalar CAS operation instead of a lock. This can be more efficient than locking. But why?

Locks have a cost at the beginning and end every time the lock is encountered. Add to this the fact that every critical section is a bottleneck where a multi-threaded system must become single-threaded for a moment. CAS, on the other hand, has no particularly expensive operations, but might cause the critical section to re-run multiple times.

Let’s consider the extremes of two variables in our system: contention and run time. Contention is a generic term describing the number of threads needing to work in the critical section at once. Run time here describes how long it takes to run the operation inside the critical section.

If an operation has low contention and short run time, CAS is almost certain to perform better. Locks have high overhead at start and end, whereas CAS is going to have almost no overhead. With low contention we might have to repeat an operation every now and then, but the operation is fast, so it doesn’t matter.

If an operation has high contention and short run time, CAS is still likely to win. You could end up with a thread or two having to repeat the operation several times, but with a larger number of threads a lock’s enforcement of a single-threaded bottleneck does not scale well.

If an operation has low contention and long run time, CAS might be a loser. If the critical section really must take hundreds of milliseconds or even longer, repeats are likely to be more costly. It may be worth A/B testing to see which wins, though.

If an operation has high contention and long run time, locks may win. At this point, however, it becomes less clear if your operation is really scalable across multiple threads at all. The bottleneck of locking for a long time on many competing threads essentially reduces your application to single-threaded. It might be time to consider how you can speed up the operation or do it in a way that doesn’t involve shared state.

Cheers.

»

In Raku, a Supply is one of the primary tools for sending messages between threads. From the way a Supply is structured, it is obvious that it provides a means for one or more tasks to send events to multiple recipient tasks. What is less obvious, however, is that a Supply imposes a cost on the sender.

Consider this program:

my $counter = Supplier.new;
start react whenever $counter.Supply {
    say "A pre-whenever $_";
    sleep rand;
    say "A post-whenever $_";
}
start react whenever $counter.Supply {
    say "B pre-whenever $_";
    sleep rand;
    say "B post-whenever $_";
}
start for 1...* {
    say "pre-emit $_";
    $counter.emit($_);
    say "post-emit $_";
}
sleep 10;

Here we have three tasks running, each in a separate thread. We let the main program quit after 10 seconds. The first two threads receive messages from the $counter.Supply. The third thread feeds a sequence of integers to this supply. You might be tempted to think that the final task will race through the delivery of events, but if so, you’d be wrong.

Consider the output of this program:

pre-emit 1
A pre-whenever 1
A post-whenever 1
B pre-whenever 1
B post-whenever 1
post-emit 1
pre-emit 2
A pre-whenever 2
A post-whenever 2
B pre-whenever 2
B post-whenever 2
post-emit 2

Notice a pattern? Even though there’s a random wait in the first two threads and no wait at all in the third, the third thread is blocked until both of the other threads complete. This behavior is the same regardless of how the Supply is tapped, i.e., it does not matter if you use whenever blocks or call .tap.

Therefore, if you want your emitter to blast through events as quickly as possible, you need to make sure the taps are written to finish as soon as possible, or consider a different solution, such as a Channel, which queues values in memory so they can be processed whenever the thread listening to that channel has time for them.
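
As a sketch of the Channel alternative, here the sender finishes all ten sends immediately while the receiver works through the queue at its own pace:

```raku
my $channel = Channel.new;

# The receiver drains the queue on its own time
my $receiver = start {
    for $channel.list -> $n {
        sleep 0.1;          # simulate a slow consumer
        say "processed $n";
    }
}

# .send returns immediately; values simply queue up in memory
$channel.send($_) for 1..10;
$channel.close;

await $receiver;
```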

Just be aware of this back pressure cost whenever using a Supply. The sender always pays.

Cheers.

»

In Raku we have a couple of basic ways of getting at the events emitted from a Supply, which raises the question: what’s the difference between them? I want to answer that question by creating a react block with a couple of intervals and then emulating the same basic functionality using tap.

Let’s start with our base react block:

sub seconds { state $base = now; now - $base }
react {
    say "REACT 1: {seconds}";

    whenever Supply.interval(1) {
        say "INTERVAL 1-$_: {seconds}";
        done if $_ > 3;
    }

    say "REACT 2: {seconds}";

    whenever Supply.interval(0.5) {
        say "INTERVAL 2-$_: {seconds}";
    }

    say "REACT 3: {seconds}";
}

The seconds routine is just a helper to give us time in seconds from the start of the block to work from. The output from this block will typically be similar to this:

REACT 1: 0.0011569
REACT 2: 0.0068571
REACT 3: 0.008015
INTERVAL 1-0: 0.0092906
INTERVAL 2-0: 0.0101116
INTERVAL 2-1: 0.5103139
INTERVAL 1-1: 1.007995
INTERVAL 2-2: 1.022309
INTERVAL 2-3: 1.5124228
INTERVAL 1-2: 2.0137509
INTERVAL 2-4: 2.014717
INTERVAL 2-5: 2.517795
INTERVAL 1-3: 3.016291
INTERVAL 2-6: 3.0182612
INTERVAL 2-7: 3.521018
INTERVAL 1-4: 4.0182113

So what does it mean? Well, the first thing to note is that all the code in the react block itself runs first. That is, it runs all the statements, including each whenever block, which registers an event tap on its Supply but does not run the block’s code yet. Once the react block finishes this setup, it blocks until either all the whenever blocks are done or a done statement is encountered. At that point, all the supplies are untapped and execution continues.

By the way, if you want to have a block run after a react block has completed (or a supply block, for that matter), you can use the special CLOSE phaser. A LEAVE phaser, on the other hand, will run as soon as the code in the react block finishes setting up the react.

Aside from that, it must be noted that everything related to the react block will only run in sequence. Raku doesn’t promise to run it in a single thread, but it does promise that no two parts of the code inside of a react block will run concurrently. This includes the first run through executing the react block itself as well as executing the whenever blocks in reaction to emitted values to supplies.

So, how would we go about this behavior using .tap? We could do it like this:

sub seconds { state $base = now; now - $base }
REACT: {
    say "REACT 1: {seconds}";

    my $ready = Promise.new;
    my $mutex = Lock.new;
    my $finished = my $done = Promise.new;

    my $interval1 = Supply.interval(1).tap: {
        await $ready;
        $mutex.protect: {
            say "INTERVAL 1-$_: {seconds}";
            # guard against keeping $done twice before the tap closes
            $done.keep if $_ > 3 && $done.status ~~ Planned;
        }
    }

    $finished .= then: -> $p {
        $interval1.close;
    }

    say "REACT 2: {seconds}";

    my $interval2 = Supply.interval(0.5).tap: {
        await $ready;
        $mutex.protect: {
            say "INTERVAL 2-$_: {seconds}";
        }
    }

    $finished .= then: -> $p {
        $interval2.close;
    }

    say "REACT 3: {seconds}";

    $ready.keep;
    await $finished;
}

This is similar to what the react block is actually doing, but with several additional manual steps. First we must prepare a couple of promises. The $ready Promise is kept at the end of the “REACT” block to release the taps to do their work. The $done Promise, extended into $finished by the .then calls, is what holds the main thread until execution is complete.

I have not implemented the additional logic of automatically keeping $done if all supplies become done. That could be done by creating another Promise for each tap, kept when that tap’s done block executes, and attaching a .then block to a Promise.allof() over all those promises. I leave solving that as an exercise for the reader.

The other major addition is the $mutex Lock object. This prevents the individual tap blocks from running simultaneously.

That should be enough. This is probably not the most efficient solution, but it does demonstrate the extra help the react block gives you. You may notice that the tap version is ever so slightly faster. This should not be a surprise: the tap version is not taking as much care with coordination as the react block. Therefore, if eking out a few extra milliseconds matters to your code, you may want to consider implementing your async coordination directly using tap and some other tools rather than a react block. However, be aware that the react block is likely saving you a pile of debugging headaches by doing all those fiddly little details for you.

And one final note: the documentation of the act method states that it works like tap, but the given code is executed by only one thread at a time. I’m uncertain what this really means, because the same basic guarantee appears inherent to tap as well: a Supply is unable to continue with another emitted message until all taps have finished running, and in practice taps all run synchronously for each message too. I haven’t found any evidence in all my work that taps on a given supply ever run concurrently. Anyway, if someone can go on to the Reddit thread for this post and explain what the actual difference is between tap and act, I would appreciate it.

Cheers.

»

A semaphore is a system of sending messages using flags. Oh wait, that’s what a semaphore is outside of computing. Among computers, a semaphore is like a kind of lock that locks after being acquired N times. This is useful for situations where you have a resource of N items, want to quickly distribute them when you know they are available, and then immediately block until a resource has been released. Raku provides a built-in Semaphore class for this purpose:

class ConnectionPool {
    has @.connections;
    has Semaphore $!lock;

    submethod BUILD(:@!connections) {
        $!lock .= new(@!connections.elems);
    }

    method use-connection() {
        $!lock.acquire;
        pop @!connections;
    }

    method return-connection($connection) {
        push @!connections, $connection;
        $!lock.release;
    }
}

Here we have a connection pool where we can quickly and safely pull entries out of the stack of connections. However, as soon as the last connection has been pulled, the .use-connection method will block until a connection is returned using .return-connection.

There is an additional .try_acquire method that can be used instead of .acquire, which returns a Bool indicating success or failure. For example, we might have a buffer for key presses that we want to fail if it fills up, rather than continuing to store key events:

class KeyBuffer {
    has UInt $.size;
    has UInt $!read-cursor = 0;
    has UInt $!write-cursor = 0;
    has byte @!key-buffer;
    has Semaphore $!buffer-space;
    has Semaphore $!lock .= new(1);

    submethod BUILD(UInt :$!size) {
        @!key-buffer = 0 xx $!size;
        $!buffer-space .= new($!size);
    }

    method !increment-cursor($cursor is rw) {
        $cursor++;
        $cursor %= $!size;
    }

    method store(byte $key) {
        $!buffer-space.try_acquire or die "buffer is full!";

        $!lock.acquire;
        LEAVE $!lock.release;

        @!key-buffer[ $!write-cursor ] = $key;
        self!increment-cursor($!write-cursor);
    }

    method getc(--> byte) {
        my $result = 0;

        $!lock.acquire;
        LEAVE $!lock.release;

        if $!read-cursor != $!write-cursor {
            $result = @!key-buffer[ $!read-cursor ];
            self!increment-cursor($!read-cursor);

            $!buffer-space.release;
        }
        
        $result;
    }
}

This data structure uses two Semaphores. One, named $!lock, is used in the same way a Lock works to guard the critical sections and make sure they are atomic. The other, $!buffer-space, is used to make sure the write operations fail when the buffer fills up.

As you can see, we use .try_acquire to acquire a resource from the Semaphore. If that method returns False, we throw an exception to let the caller know the operation failed. If the method returns True, then we have acquired permission to add another entry to the buffer. When we read from the buffer, we still use .release to mark the space available again.

I’ve used Semaphore for the mutual exclusion lock because it can be used that way and that’s what we’re talking about. However, the protect method of Lock or Lock::Async may be a better choice here, as you don’t need to be careful to make sure .release gets called; the .protect block takes care of that for you. With that said, a LEAVE phaser is a good way to make sure .release is called, as LEAVE phasers run no matter how the block exits (i.e., even on an exception).

It should be noted that if an exception happens in the .getc method above after the $!read-cursor is incremented, but before $!buffer-space.release is called, you could end up with the buffer in a bad state where it no longer has as much space. As such, an improvement that might be worth doing is making sure that exceptions in that if-block are caught and dealt with if such an exception is possible.

A general thing to keep in mind is that whenever dealing with concurrency, the seemingly trivial edge cases can easily become important. Sometimes it becomes important in unforeseen ways.

Cheers.