Goroutine, Concurrency and Parallelism

Concurrency is not parallelism. Parallelism is when two or more threads execute code simultaneously on different processor cores. If you configure the runtime to use more than one logical processor, the scheduler will distribute goroutines across those logical processors, which results in goroutines running on different operating system threads. However, to get true parallelism you need to run your program on a machine with multiple physical cores. Otherwise, the goroutines will run concurrently on a single physical core, even though the Go runtime is using multiple logical processors.
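As a minimal sketch of this distinction, the snippet below only inspects the runtime's settings: runtime.NumCPU reports the logical CPUs available to the process, and runtime.GOMAXPROCS reads or sets the number of logical processors the scheduler may use (since Go 1.5 it defaults to NumCPU).

package main

import (
    "fmt"
    "runtime"
)

func main() {
    // Logical CPUs usable by this process.
    fmt.Println("logical CPUs:", runtime.NumCPU())

    // GOMAXPROCS(0) queries the current setting without changing it.
    fmt.Println("logical processors (P):", runtime.GOMAXPROCS(0))

    // Restrict the scheduler to one logical processor: goroutines still
    // run concurrently, but never in parallel.
    runtime.GOMAXPROCS(1)
    fmt.Println("after GOMAXPROCS(1):", runtime.GOMAXPROCS(0))
}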

About the Goroutine scheduler

There are three common models for threading. One is N:1, where several userspace threads run on one OS thread. This has the advantage of very fast context switches, but it cannot take advantage of multi-core systems. Another is 1:1, where one thread of execution matches one OS thread. It takes advantage of all of the cores on the machine, but context switching is slow because it has to trap into the OS.

Go tries to get the best of both worlds by using an M:N scheduler: it schedules an arbitrary number of goroutines onto an arbitrary number of OS threads. You get quick context switches and you take advantage of all the cores in your system. The main disadvantage of this approach is the complexity it adds to the scheduler.

To accomplish the task of scheduling, the Go scheduler uses three main entities: M, P and G.

The M represents an OS thread. It's a thread of execution managed by the OS and works pretty much like your standard POSIX thread.

The G represents a goroutine. It includes the goroutine's stack, its instruction pointer and other information important for scheduling, such as any channel it might be blocked on.

The P (processor) represents a context for scheduling. You can look at it as a localized version of the scheduler which runs Go code on a single thread. It's the key piece that takes us from an N:1 scheduler to an M:N scheduler.
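A rough way to observe these entities from a running program (a sketch, not part of any scheduler API): the number of P's is whatever GOMAXPROCS reports, runtime.NumGoroutine counts the G's, and the M's can be watched externally with the GODEBUG=schedtrace option.

package main

import (
    "fmt"
    "runtime"
    "time"
)

func main() {
    // GOMAXPROCS(0) queries the current number of P's without changing it.
    fmt.Println("P:", runtime.GOMAXPROCS(0))

    for i := 0; i < 5; i++ {
        go func() {
            time.Sleep(time.Second)
        }()
    }

    // Each goroutine above is a G; NumGoroutine also counts main's goroutine.
    fmt.Println("G:", runtime.NumGoroutine())

    // M's (OS threads) are not exposed directly; run the program with
    // GODEBUG=schedtrace=1000 to have the runtime print thread, P and
    // goroutine counts once per second.
    time.Sleep(2 * time.Second)
}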

Look at the following code:

package main

import (
    "fmt"
    "runtime"
    "sync"
)

func main() {
    runtime.GOMAXPROCS(1)
    wg := sync.WaitGroup{}
    wg.Add(20)
    for i := 0; i < 10; i++ {
        go func() {
            // goroutine 1 captures the loop variable i by reference;
            // before Go 1.22, every iteration shares the same i.
            fmt.Println("go routine 1 i: ", i)
            wg.Done()
        }()
    }
    for i := 0; i < 10; i++ {
        go func(i int) {
            // goroutine 2 receives i as an argument, so it gets its own copy.
            fmt.Println("go routine 2 i: ", i)
            wg.Done()
        }(i)
    }
    wg.Wait()
}

The output of the above code looks like this:

go routine 2 i:  9
go routine 1 i:  10
go routine 1 i:  10
go routine 1 i:  10
go routine 1 i:  10
go routine 1 i:  10
go routine 1 i:  10
go routine 1 i:  10
go routine 1 i:  10
go routine 1 i:  10
go routine 1 i:  10
go routine 2 i:  0
go routine 2 i:  1
go routine 2 i:  2
go routine 2 i:  3
go routine 2 i:  4
go routine 2 i:  5
go routine 2 i:  6
go routine 2 i:  7
go routine 2 i:  8

Concurrency in Go means that some of the functions in the code can run at the same time logically, but they will not necessarily run at the same time physically. The call runtime.GOMAXPROCS(1) sets the number of P's to 1, which binds all goroutines to the same P. When a new G is created or an existing G becomes runnable, it is pushed onto the current P's list of runnable goroutines. When a P finishes executing a G, it first tries to pop a G from its own list of runnable goroutines; if the list is empty, the P chooses a random victim (another P) and tries to steal half of its runnable goroutines. In addition, in the current implementation the most recently created goroutine is placed in the P's runnext slot and is scheduled before the local list, which is why the last goroutine created in main (go routine 2 with i = 9) prints first; the remaining goroutines then run in the order they were queued.
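For contrast, here is a minimal sketch of the same pattern with more than one P (the value 4 below is an arbitrary choice). With several logical processors available, runnable goroutines are distributed across OS threads, idle P's steal work, and the print order becomes nondeterministic.

package main

import (
    "fmt"
    "runtime"
    "sync"
)

func main() {
    // More than one P: goroutines may run on different OS threads in parallel.
    runtime.GOMAXPROCS(4)

    var wg sync.WaitGroup
    wg.Add(10)
    for i := 0; i < 10; i++ {
        go func(i int) {
            defer wg.Done()
            fmt.Println("parallel i: ", i)
        }(i)
    }
    wg.Wait()
}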

When we launch goroutines from a loop, the variable i in the first loop is not passed as an argument to the anonymous function. The closure captures i by reference, so all of those goroutines share the same variable; by the time goroutine 1 actually runs, the loop has already finished and i is 10, which is what every one of them prints. In the second loop, adding i as a parameter to the closure means i is evaluated at each iteration and copied onto the goroutine's stack, so each goroutine gets its own value and nothing is shared between iterations. (Since Go 1.22 the loop variable is scoped per iteration, so both loops would print the values 0 through 9.)
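Before Go 1.22, another common fix was to shadow the loop variable at the top of the loop body so that each iteration gets its own copy; a minimal sketch (the label "go routine 3" is just for illustration):

package main

import (
    "fmt"
    "sync"
)

func main() {
    var wg sync.WaitGroup
    wg.Add(10)
    for i := 0; i < 10; i++ {
        i := i // shadow the loop variable so each iteration has its own copy (needed before Go 1.22)
        go func() {
            fmt.Println("go routine 3 i: ", i)
            wg.Done()
        }()
    }
    wg.Wait()
}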

References

Rob Pike: Go Concurrency Patterns
Rob Pike: Concurrency is not Parallelism
Dmitry Vyukov: Scalable Go Scheduler Design Doc
Wiki: CommonMistakes
Wiki: LearnConcurrency
