I recently finished "The Go Programming Language". It is a very well written, in-depth book. The learning curve is smooth; I think even relatively new programmers could use it as an intro to their first programming language. Otherwise, check out O'Reilly's "Introducing Go: Build Reliable, Scalable Programs", which I think is much friendlier but covers less.

Here is a raw list of some interesting bits, learned primarily from "The Go Programming Language", that I collected for my future reference.

* there are constants, variables (addressable values) and non-addressable values (e.g. struct literals, results of function calls). A variable is something that stores a value and either has a name (e.g. x) or is reachable through an addressable expression (e.g. x[i])
* lexical scope — the region of code where a name is visible and resolved (function body, if, for, switch statements, package)
* variable names can be shadowed in nested lexical scopes
* the package block is the outermost lexical scope
* the lifetime of a variable != its lexical scope

* the programmer does not distinguish between heap and stack; the compiler decides where a variable lives via escape analysis
* the address-of operator "&" can be applied only to addressable values (variables)
* a pointer is the address of a variable. All variables have addresses and can have pointers taken, but not all values do
* a pointer is not a numerical OS-level memory address
* a pointer can be compared to nil, compared to another pointer, or dereferenced. There is no pointer arithmetic
* it is perfectly fine to return the address of a local variable from a function
* the function "new" creates a new unnamed variable and returns its pointer (same effect as returning the address of a local variable from a function)
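A minimal sketch of the last two points; the function and variable names are made up:

```go
package main

import "fmt"

// newCounter returns the address of a local variable; escape
// analysis moves it to the heap, so the pointer stays valid
// after the function returns.
func newCounter() *int {
	n := 0
	return &n
}

func main() {
	p := newCounter()
	q := new(int)       // equivalent: unnamed variable, pointer returned
	fmt.Println(*p, *q) // 0 0
	*p++
	fmt.Println(*p) // 1
}
```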

* swapping variables: "i, j = j, i"; rotations like "i, j, k = j, k, i" work too
* the flag package has a nice API for command-line arguments
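A sketch of the flag package; the -n and -name flags (and the greet helper) are invented for illustration:

```go
package main

import (
	"flag"
	"fmt"
	"strings"
)

// greet builds the message; separated out so it is easy to test.
func greet(n int, name string) string {
	return strings.TrimRight(strings.Repeat("hello, "+name+"\n", n), "\n")
}

func main() {
	// Hypothetical flags: run as "prog -n 3 -name gopher".
	n := flag.Int("n", 1, "number of repetitions")
	name := flag.String("name", "world", "who to greet")
	flag.Parse()
	fmt.Println(greet(*n, *name))
}
```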

* if a package has an "init" function, it is run once per program, when the package is first imported
* numeric types compare only within the same type (comparing an int64 to an int32 without conversion is a compile error)

* all Go source code is UTF-8
* string literals inside Go code are therefore UTF-8 too
* a string behaves like a read-only []byte
* UTF-8 is a multi-byte encoding that is fully compatible with ASCII. It can represent all languages and more, it is efficient, and it has room for a huge number of code points. Sorting UTF-8 by bytes is lexicographic sorting by code point. It has a number of other benefits too. Some of Go's co-authors (Rob Pike, Ken Thompson) also designed UTF-8. There are also UTF-16 and UTF-32, which are simpler but less memory efficient.
* use runes to work with UTF-8 strings
* use []byte for non-UTF-8 data and handle the encoding yourself or via a 3rd-party module
* the strings, unicode, and unicode/utf8 packages deal with UTF-8
* watch out for file paths, since a file path has to be in the encoding of the OS, which can vary. For example, it can be a non-UTF-8 Korean or Japanese encoding

* constants can be untyped, which allows "1" to be resolved at compile time to int64, uint64, float, etc.
* the constant generator iota allows enumerating in const declarations. Useful for enums and bitsets
* numeric constants can be very large. It is safe to assume at least 256 bits of precision.
* constant expressions are checked at compile time, e.g. for division by zero, so you always get an ordinary number at compile time.
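An iota-powered bitset, combining the two bullets above (the Flags type and its constants are made up):

```go
package main

import "fmt"

// Flags built with iota: each constant is a distinct bit, so a
// plain uint works as a small set with fast union/intersection.
type Flags uint

const (
	Read    Flags = 1 << iota // 1
	Write                     // 2
	Execute                   // 4
)

func main() {
	perm := Read | Write           // union
	fmt.Println(perm&Read != 0)    // true: membership test
	fmt.Println(perm&Execute != 0) // false
}
```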

* the [...]int64{1,2,3} notation is nice for arrays when you don't want to count the elements yourself
* a slice holds a pointer into an array, a length, and a capacity. A very safe and space-efficient structure.
* to append a slice to a slice use "x = append(x, y...)" where x, y are slices; "y..." expands the slice into a list of arguments
* use "func x(args ...int)" for a variable number of arguments
* it is often useful not to "pop" from a slice but just to re-slice it, treating values beyond the slice as garbage. Very convenient, since there is no need to explicitly free memory.
* maps have no guaranteed iteration order; the Go runtime even varies it from run to run of the same code
* map keys must be comparable with "==", which covers a lot of types, but not slices, maps, or functions
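The append and re-slice idioms in one sketch (the pop helper is my own naming):

```go
package main

import "fmt"

// pop re-slices instead of freeing memory: the last element stays
// in the backing array as garbage until overwritten or collected.
func pop(s []int) (int, []int) {
	return s[len(s)-1], s[:len(s)-1]
}

func main() {
	x := []int{1, 2}
	y := []int{3, 4}
	x = append(x, y...) // "y..." expands the slice into arguments
	fmt.Println(x)      // [1 2 3 4]

	top, x := pop(x)
	fmt.Println(top, x) // 4 [1 2 3]
}
```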

* functions are first class in Go, meaning they can be passed as arguments or assigned to variables
* closures are possible. A closure is a function with access to the scope it was defined in
* function values are not comparable to each other, only to nil
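A classic closure sketch — the returned function keeps its own copy of the enclosing scope:

```go
package main

import "fmt"

// counter returns a closure: the returned function keeps access
// to n from the scope it was defined in, even after counter returns.
func counter() func() int {
	n := 0
	return func() int {
		n++
		return n
	}
}

func main() {
	next := counter()
	fmt.Println(next(), next(), next()) // 1 2 3
}
```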

* there is a shorthand to define multiple fields of the same type in structs: "type A struct {a, b int}"
* struct literals can have a named-field form like "x := Point{X: 1, Y: 2, name: "asdf"}"; from outside the package, only exported fields can be set this way
* promotion mechanism: structs can be embedded, as in "type Window struct {Box; size int}" where Box is a type. This creates a field Box plus syntactic sugar to refer to Box's fields and methods directly, so there is no need to chain through the field name.
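A sketch of embedding and promotion (Box and Window here are invented types):

```go
package main

import "fmt"

type Box struct{ W, H int }

func (b Box) Area() int { return b.W * b.H }

// Window embeds Box: Box's fields and methods are promoted.
type Window struct {
	Box
	Title string
}

func main() {
	w := Window{Box: Box{W: 3, H: 2}, Title: "demo"}
	fmt.Println(w.W)      // 3: promoted field, no w.Box.W needed
	fmt.Println(w.Area()) // 6: promoted method
}
```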

* the most basic errors are just strings (errors.New, fmt.Errorf)
* the error interface has a single method, "Error() string"
* if you have logic that checks whether an error is a specific concrete type via type assertion (like an EOF error), do it immediately up the call stack. This is because "fmt.Errorf" is often used to turn an error into a string message, which discards the previous concrete type, so you cannot do type assertions on it anymore. TODO: what if you still want to?

* methods have a resolution mechanism: a method with a pointer receiver can be called on an addressable variable, and a method with a value receiver can be called through a pointer. And of course a method can be called on the same receiver type it was defined with.

* a bitset implemented with an integer is nice, since you get fast union and other bitwise logic
* an interface containing a nil pointer is not nil. An interface value internally holds a (type, value) pair; once you assign a typed nil pointer, the type part is non-nil, so the interface compares unequal to nil.
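The nil-interface gotcha in code (myErr and fail are invented for the sketch):

```go
package main

import "fmt"

type myErr struct{}

func (*myErr) Error() string { return "boom" }

// fail returns a typed nil pointer through the error interface.
func fail() error {
	var p *myErr // nil pointer
	return p     // interface now holds (type=*myErr, value=nil)
}

func main() {
	err := fail()
	fmt.Println(err == nil) // false: the type part is non-nil
}
```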

* the sort package can sort with an arbitrary compare function. To use it, define a new type that implements sort.Interface (Len(), Less(i, j), Swap(i, j)).
* there is type-switch notation, "switch x.(type)", or "switch x := x.(type)" if you want to use x with the concrete type inside the switch scope

* there is a main goroutine that starts with the main program
* a goroutine has a growable stack: OS threads get a fixed-size stack (often around 2MB), while a goroutine can start at just a few KB and grow up to 1GB, which allows a very large number of goroutines and very deep recursive calls

* buffered channels are most useful when no single goroutine drains the whole channel all the time
* mnemonics for channels: "<-chan" is a receive-only channel (the arrow takes data out), "chan<-" is a send-only channel (the arrow puts data in)
* a channel made without a size is unbuffered: every send blocks until a matching receive, and vice versa
* a leaked goroutine is one blocked forever on a channel that nobody will ever service, e.g. writing to a channel nobody reads. The runtime does not detect this. Use cancellation — e.g. closing a done channel, together with a WaitGroup — so blocked goroutines can exit.
* a binary semaphore is useful for allowing a single goroutine at a time: "make(chan bool, 1)"
* a counting semaphore is useful for limiting the number of concurrent goroutines (so we don't exhaust OS resources): "make(chan bool, 20)"
* use sync.WaitGroup to wait until all goroutines are finished
* use sync.Once for one-time initialization in a concurrent environment, also called lazy initialization
* use sync.Mutex Lock and Unlock
* use sync.RWMutex if you need both an exclusive lock (for writes) and non-exclusive read-only access
* memory synchronization: it is important to use locks correctly. Besides mutual exclusion, mutexes act as memory barriers, making one goroutine's writes visible to others despite CPU-level caching.
* use the race detector tool
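A counting semaphore, a mutex, and a WaitGroup together in one sketch (sumConcurrently is a made-up function; run it with the race detector to verify):

```go
package main

import (
	"fmt"
	"sync"
)

// sumConcurrently adds 1..n with at most limit goroutines inside
// the guarded section at a time.
func sumConcurrently(n, limit int) int {
	var wg sync.WaitGroup
	sem := make(chan struct{}, limit) // counting semaphore
	var mu sync.Mutex
	total := 0
	for i := 1; i <= n; i++ {
		wg.Add(1)
		go func(v int) {
			defer wg.Done()
			sem <- struct{}{}        // acquire a slot
			defer func() { <-sem }() // release it
			mu.Lock()
			total += v
			mu.Unlock()
		}(i)
	}
	wg.Wait() // block until every goroutine calls Done
	return total
}

func main() {
	fmt.Println(sumConcurrently(10, 3)) // 55
}
```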

* Go has vendoring configuration for packages, so you can remap where packages come from
* you can run a godoc server locally
* a package under an "internal" directory can be imported only by packages rooted at the parent of "internal"
* circular dependencies are not allowed
* to avoid circular dependencies in tests, factor shared code into a new package imported by both. To "backdoor" private package-level variables for tests, export them in a single test file, often called "export_test.go"

* Go has tests (Test...), benchmarks (Benchmark...) and examples (Example...) for testing
* elaborate testing frameworks are discouraged: "they feel like a foreign language"
* define tests very simply at first; it is bad to start with abstract test utilities
* white-box testing — testing that the internals of a package are correct, not just its interface
* avoid brittle tests — tests that change often with changes in code. Try to write tests that survive code changes (e.g. match a substring of an error message)
* the go command line has a tool for coverage reporting and visualization
* Go benchmarks are easy to define and use
* for benchmarking, use different input sizes and algorithms to see their behavior and comparative advantage
* the go command line also has profiling. Profiling shows which functions use the most CPU or memory, or block the most
* Example functions are shown in godoc to demonstrate how code is used. This is nice since they showcase how to run it, and the compiler checks that the code is correct. If one has an "Output:" comment in its body, "go test" checks that it produces the same output to stdout.
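A sketch of what the three kinds of test functions look like; Reverse is a made-up function under test. In a real project the Test/Benchmark/Example functions live in a _test.go file and run via "go test"; here main calls the example directly so the sketch is self-contained:

```go
package main

import (
	"fmt"
	"testing"
)

// Reverse is a hypothetical function under test.
func Reverse(s string) string {
	r := []rune(s)
	for i, j := 0, len(r)-1; i < j; i, j = i+1, j-1 {
		r[i], r[j] = r[j], r[i]
	}
	return string(r)
}

func TestReverse(t *testing.T) {
	if got := Reverse("go"); got != "og" {
		t.Errorf("Reverse(%q) = %q, want %q", "go", got, "og")
	}
}

func BenchmarkReverse(b *testing.B) {
	for i := 0; i < b.N; i++ {
		Reverse("benchmark me")
	}
}

// ExampleReverse appears in godoc; "go test" checks the Output comment.
func ExampleReverse() {
	fmt.Println(Reverse("hello"))
	// Output: olleh
}

func main() {
	ExampleReverse() // prints olleh
}
```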

* reflection is powerful but easy to misuse; better to stay away from it
* unsafe allows access to raw memory addresses. However, addresses can change during runtime; the Go spec does not guarantee their stability
* memory layout is not guaranteed by the Go spec
* unsafe.Sizeof may be useful to check the size of structs
* calling C code from Go looks easy, and calling Go code from C is also possible. Check out cgo.


Let's look at serious scientific and high-performance production-grade software.

UPD: 2020-06-08

  • SpaceX rocket software is in C++
  • Tesla likely runs on C++ and Go (based on their GitHub)

C++ is leading by a large margin. Its ecosystem is astonishing. Pretty much everything of significance has been written and is maintained in C++. Bear in mind, other languages like Golang or Rust have been around for about 10 years, and still the main applications have not migrated to them. Even more telling, TensorFlow itself is not in Golang, even though both are from Google and TensorFlow was created 6 years later. It is true that Golang has momentum in service development — things like networking, ORMs, business logic — yet it is not used much for anything else.

Some languages were made by mistake (Javascript), some languages are too minimal (Golang), some do not scale (Python). Maybe, then, C++ has plenty of good besides performance? It has first-class OOP, generics, operator overloading, a standard library, gRPC and Apache Thrift, and with RAII memory leaks are rare. Where am I going with this? If you are going to write high-performance software, it is likely going to be in C++.


It is not the critic who counts; not the man who points out how the strong man stumbles, or where the doer of deeds could have done them better. The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood; who strives valiantly; who errs, who comes short again and again, because there is no effort without error and shortcoming; but who does actually strive to do the deeds; who knows great enthusiasms, the great devotions; who spends himself in a worthy cause; who at the best knows in the end the triumph of high achievement, and who at the worst, if he fails, at least fails while daring greatly, so that his place shall never be with those cold and timid souls who neither know victory nor defeat.

-- T. Roosevelt, 1910


Your project is not good, your manager is an asshole, the stock is going down, your back started aching again. Finally, you got the award you always wanted, the promotion, the recognition. Problems, plans, achievements — it's all you. Everything circles around you. This is your world and you are at its center. From day 1 until the end, you are the lead of this story.

Yet, you are small. Your world is a succession of days. With the rising sun, your day begins. You have a breath of fresh air on your way to work, thinking about what you will say to your buddies, then talking to them, getting through the day, finishing the job, thinking of dinner on the way back home, and, at last, going to rest, until another day begins. In reality though, the sun did not go up; it has always been there, fixed. It is your world that is just a viewpoint from a tiny spot on a spinning rock circling through the vacuum of space around a giant nuclear reactor.

WYSIATI. You are a 24/7 observer of your story. It is all that matters. And not even in years past or future, but right now. The most pressing questions are whether you are hungry or happy, now. At best, you plan for the next season or a couple of years ahead. Rarely do you reflect on the distant past or future. That is just you, but there are many others with their own stories. There have been even more before you and there will be evermore after. What you do now can't change the stories of those who lived in the past. Likewise, it will be irrelevant to generations far in the future. The biggest villains and saints will fade into oblivion and their names will vanish in archives. What could happen will happen, with or without you. And what does it matter to you anyway what someone in the past or future thinks of you? You are but a speck of dust.

This does not mean you should not do cool stuff. Please, do! Rather, next time immediate worries overwhelm you, hold on for a moment and think about the bigger picture.

bye korea

Leaving Korea today. How will I remember it this time?

Quality service. Civilized people. Cheap housing. Clean streets. Late hours. Latest tech everywhere.

Things just work and damn well for the good of all the people. Things improve.

Korean is everywhere. Whatever is non-Korean is lame. Living with only English is possible, but you miss out on occasion.

Strong respect for experience. Private networks. New friends. Old friends. Giving back to the community.

Better sense of things, place, time, people, and myself.

This is not the first time I have come back, and it will not be the last. I will keep visiting, and when the stars align I will stay here with no worry of leaving.


2019 books

This year

... discovered academic publishers — MIT, Princeton, Cambridge, Wiley — and very strong material: Normal Accidents, Streetlights and Shadows, Cornered, Causal Reasoning in Physics

... expanded on practical software engineering: The Data Warehouse Toolkit, The Clean Coder, Production Ready Microservices

... looked deep inside myself: The Ego Tunnel, The Manual, The Meditations, On Having No Head

... learned to write a bit better: Dreyer's English, Zen in the Art of Writing

... followed powerful stories: the life of Wernher von Braun in Dr. Space, willpower in The White Devil's Daughters, the horrors of Night, the great leap of Rocket Men, the cheerfulness of Jackie Chan, neuroscience in Into the Gray Zone

... got some good life advice: Never Split the Difference, Getting Things Done, Made to Stick, 12 Rules for Life, Range

... touched a variety of other interesting topics: What We Cannot Know, Brief Answers to the Big Questions, Spying on Whales, The Art of Invisibility, Empty Planet, Novacene

... and just enjoyed some good entertainment: We Are Legion, Exhalation, What If, Soonish

43 books, longest 460p, average 270p.

Almost all paperback. Almost all gifted to charity, library@coex and friends.


I got this idea that it would be fun to make a couple of wild predictions and later go back and see how far off I was. So here is the list,

10 years

The next big thing is going to be Brain Computer Interfaces (BCI). Data transfer speed will increase exponentially. Materials will get less invasive. Price will drop exponentially. Of course, at first there will be roadblocks, but once the first great leap is reached — reading at 2x, memory download, skill upload — private and government capital will pour in. First trials will quickly finish in the early 2020s. The race will be at full speed by 2027.

Outer space is unbelievably rich in commodities. Yet keeping human-friendly conditions out there would be even more expensive. All operations, especially in deep space, will be fully unmanned. Software will support the whole life cycle — reaching remote places, extracting materials, building facilities, transporting them back, conducting scientific experiments. The technology is almost ready. Mars and the asteroids are pretty far to reach, but the Moon is very close. The first economically meaningful Moon base will operate by 2028.

All main languages — Python, C++, C#, Java, Javascript — will still be around. SQL and the DBs that power it will gradually absorb best practices from all the variations that continue to emerge every year or two. Perhaps more focus will be on persisting and sharing computed (in-memory) state between processes to save time on restarts and deliver updates faster. With high network speeds, there might not be a need for cold storage on user devices. The big thing in computing will be doing global state very well, or finding ways to operate equally well without it.

Social networks will be controlled by governments. Mandates will be imposed on what you search and how you talk. Access to mass distribution of information or to social organization will be tightly controlled by governments. The USA will likely protect individual privacy at least in some form, while places like China will require complete transparency to the government even in private matters. On the bright side, similarly to how Instagram unlocked ephemeral social value, new kinds of apps will emerge. Instagram itself will be replaced by its successor or transformed beyond recognition. The emphasis will be on real physical connection and on local communities and businesses. At the end of the decade, VR and BCI will be the new place for social interaction. Hopefully, email will still be around in 2030.

People will continue to travel a lot and keep learning from different places. This may not make the public act coherently, but it will influence its choices, such as what counts as good health care, good security, or good transport. In the shadows, the public will evolve — one by one, individuals will get smarter, and with that their aggregate, the public, will get smarter too. Large waves of migration could happen. Institutions will get stronger too. Starting at big tech, a wave of growth mindset will spread to every big organization, and the ones that adopt it will prosper. Many governments and nations will go through self-reflection as well, hopefully leading to a better life for everybody. The developed economies of the US, Japan, China, and Korea will grow, primarily driven by dominant positions in certain niches of the high-tech sector.

20 years

It is hard to envision what it would be, but it is possible that a new kind of computing stack, from assembly up to the high-level abstractions, will emerge, one not based on the Turing machine. It will be faster and more robust for distributed computations that heavily use the network. The lowest level of the software stack will be powered by a system only remotely resembling assembly.

At the end of the 2040s, AGI is achieved, but the singularity does not happen. There will be an emergence of fully digital cognition similar to humans, but its exploding self-improvement will not happen for quite a while, due to some fundamental physical or mathematical challenges we cannot fathom today. It is also possible that this already happened in the late 2010s, but we were not aware of it. You will talk to a new digital species over your computer in 2038.

In the most exotic form, BCI, VR, AGI, industrial automation and space programs all merge together into one symbiotic platform. But then, maybe, none of this will really happen. Time will tell.


Look around: do you know where you are? Maybe you are in your room, maybe in a cafe, in a car, or outdoors. You must be clearly convinced you are in a 3D space filled with all sorts of 3D objects. This sense is fundamental to your reality. But how do you know all that?

All you see around you is colorful 2D shapes. And yet, when you move slightly, these shapes start to deform and overlap. The closer you are, the bigger the changes, the easier it is to spot them, and the stronger the sense of spatial awareness. The further you are, the less objects change, and the harder it is to judge their positions and shapes. At the horizon everything becomes 2D. Then there are heuristics that make the whole job a lot easier — gradients that mimic shadows, dark and bright colors, continuity of colors and shapes, rules of projection. Funny enough, they don't work all the time, leading to an abundance of illusions, and of creatures that exploit these illusions for survival. In the worst case, if you see a car far away — no parallax, small shapes, no gradients — there is no basis to believe it has the 3D shape you believe it has. All you know is that you have seen cars before, which leads you to believe it has that particular kind of shape. Next time, observe your own thought process when guessing the shape of a distant object: you get just a recollection from memory. Maybe you would even do some reasoning, if that thing really got your attention. If it is something never seen before, you would have absolutely no idea. This is basically Tesla Autopilot.

It is easy to think of an evolutionary interpretation. Why do we need 3D? It allows more accurate predictions of how the world changes. It helps dexterity at tasks like food gathering, hunting, and moving. Why are we so bad at 3D at a distance? If we measure the complexity of the world by the number of objects and their interactions — nodes and edges in a world graph — then the 3D world quickly outpaces its 2D version as it grows. Besides, the most important things to you are probably nearby — food, danger, friends. If something is far, it is pretty safe to keep a minimal understanding of it until you get closer — or it gets closer to you! — and only then think about how it fits into your 3D world. Simple queries about faraway objects, such as "which direction is it moving?", "what is its size?", "is it a single thing?", "what kind of thing?", all work just fine with fast and memory-efficient 2D memory and reasoning. Why do we need heuristics, then? Even at small scales, 3D processing may require a lot of effort, while short distances demand fast and reliable answers. Thus, it is a good idea to have a bunch of fast, loosely coupled heuristics. It is also interesting how other creatures developed their perception of space due to the specifics of their environments. The worlds of bugs in the dynamic micro-scale, large creatures in the rainforest, whales in wide oceans, and birds spending a fair portion of their time looking down at a flat surface covered with all sorts of things — all must be very different from ours. Just as interesting is how the subjective perception of 3D changes with echolocation or a sense of electromagnetic fields — is it similar to ours, fake 3D on top of 2D imagery, or a totally new sense, like temperature?

In the end, it is just mind-blowing that the 3D world you live in is nothing but a 2D image and a sense. This sense is not even absolute, but rather a continuum, from very strong nearby to weak far away. All the heavy lifting is done behind the scenes by an evolutionary algorithm encoded in the neural cortex. It works hard to provide you with the most accurate and fast representation based on all sorts of clues and extra signals. You don't even notice it. When it fails, you don't realize it. But most of the time it works spectacularly. You wholeheartedly believe it and start living in the world of its output, even though that world does not directly exist anywhere. Truly marvelous tech right here.