⊃ what happens when..

..you run a program from the shell? background, from IITK: the shell wraps the kernel, and the kernel determines what other processes can run; it also mediates access to the hardware. any such hardware access / general io must go through system calls. the shell: when focused, the kernel echoes keystrokes to the screen. when Enter is pressed, the line is passed to the shell, which attempts to interpret it as a command. the shell figures out you want to run /bin/ls (or whatever), makes a system call to start /bin/ls as a child process (forking), and gives it access to the screen and keyboard through the kernel. this forking copies the environment from the parent process to the child. then the shell sleeps, waiting for that command to finish..
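the fork/exec/wait dance can be sketched in python (my sketch, not from the IITK notes; POSIX only, so `os.fork` is available):

```python
import os

def run_command(path, argv):
    """Sketch of what the shell does: fork a child, exec the program
    in the child, and wait for it in the parent (POSIX only)."""
    pid = os.fork()
    if pid == 0:
        # child: fork() copied the parent's environment into this process;
        # exec now replaces the process image with the new program
        os.execv(path, argv)
    # parent ("the shell") sleeps until the child finishes
    _, status = os.waitpid(pid, 0)
    return os.WEXITSTATUS(status)
```

e.g. `run_command("/bin/ls", ["/bin/ls"])` would list the current directory and return ls's exit status.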

more ⪧

⊃ os

Some notes on the Think OS book from Green Tea Press.

more ⪧

⊃ rsa

At work we’ve been setting up tinc networks alongside OpenVPN. I wanted to use the public key embedded in the OpenVPN crt with tinc, so I started trying to parse the crt and use pyasn1 to create a public RSA key in a tinc-friendly format. I was learning about the exponent and modulus in RSA when I eventually realized the public key could be generated from the private key :| but in the meantime here are some notes from reading about RSA.
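for reference, a toy worked example of the exponent and modulus (tiny textbook primes, my own illustration, definitely not for real use):

```python
# Toy RSA with tiny primes -- just to see where the exponent and
# modulus come from, not for real use.
p, q = 61, 53
n = p * q                      # modulus (appears in both keys)
phi = (p - 1) * (q - 1)        # Euler's totient of n
e = 17                         # public exponent, coprime with phi
d = pow(e, -1, phi)            # private exponent: d*e == 1 (mod phi)

# the public key is just (n, e); an RSA private key file also carries
# n and e alongside d, p and q -- which is why the public key can be
# recovered straight from the private key
msg = 42
cipher = pow(msg, e, n)        # encrypt with the public key
plain = pow(cipher, d, n)      # decrypt with the private key
```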

more ⪧

⊃ website thinkin

..I’d hate for this sidebar site to be my main thing – it’d be fun to be more creative

more ⪧

⊃ copying in tmux

copying in tmux ..couldn’t find any good way to do it with the keyboard. lots of tutorials involving copy-pipe and xclip and vi-copy; none of them worked for me. best solution I found was to use the mouse :| hold shift and then you can select text with the mouse; ctrl-shift-c to copy

⊃ getting started with hugo

some notes on the static site builder, hugo..

more ⪧

⊃ microservices and the monolith

part I: lots of nice links to fowler. started adding new features as microservices, leaving the monolith in place; the microservices talked to the monolith’s public API like any other app ..but eventually they had to add an internal api. part II: broke the monolith apart by identifying ‘bounded contexts’ – “well-contained feature sets, highly cohesive, and not too coupled with the rest of the domain”. and part III: broke their team down to

more ⪧

⊃ microservices, first bits

on architecture: “The job of good system architects is to create a structure whereby the components of the system – whether Use-cases, UI components, database components, or what have you – have no idea how they are deployed and how they communicate with the other components in the system.” on micro-architectures: interesting note on synthetic / active monitoring – periodically simulate a customer using your service and measure the simulated actions ..but overall not very substantive. on configuration drift: either reapply configurations or burn down the servers (see fowler’s ‘phoenix server’ concept); this is from 2011, and I think it’s a pretty well-accepted idea now. on the size of microservices: one brain should be able to comprehend the whole thing. it may also be useful to divide along read/write boundaries so those can scale separately. “The point, though, is that each application within a business capability should have a comprehensible single purpose.”

more ⪧

⊃ on apache kafka

Notes from this slideshare. very high throughput messaging. producers write to brokers; consumers read from brokers. data is stored in ‘topics’; topics are split into partitions, which are replicated. topics are ordered, immutable sequences of messages that are appended to. each message in a partition receives a sequential, unique (per-partition) id called the ‘offset’. consumers track their positions via (topic, partition, offset) tuples. partitions are for consumer parallelism. there are also
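the (topic, partition, offset) bookkeeping can be sketched with a toy in-memory model (my illustration of the log structure, nothing to do with the real Kafka client API):

```python
from collections import defaultdict

class Broker:
    """Toy model: a topic is a set of partitions, each an append-only
    list; a message's index in its partition is its offset."""
    def __init__(self):
        self.topics = defaultdict(lambda: defaultdict(list))

    def produce(self, topic, partition, message):
        log = self.topics[topic][partition]
        log.append(message)
        return len(log) - 1          # the new message's offset

class Consumer:
    """Tracks its own position per (topic, partition) -- the broker
    doesn't remember what this consumer has read."""
    def __init__(self, broker):
        self.broker = broker
        self.positions = {}          # (topic, partition) -> next offset

    def poll(self, topic, partition):
        offset = self.positions.get((topic, partition), 0)
        log = self.broker.topics[topic][partition]
        messages = log[offset:]
        self.positions[(topic, partition)] = len(log)
        return messages
```

the point of the sketch: offsets are just positions in an immutable log, and each consumer owns its own cursor.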

more ⪧

⊃ outwards from the middle of the maze

https://www.youtube.com/watch?v=ggCffvKEJmQ developers used to have application-level guarantees via transactions (think action1, action2, commit), but in today’s ecosystem (mostly sans transactions) there are fundamental problems with latency, concurrency and partial failure that are hard to hide. when will we get those guarantees back? and how? he mentions brewer’s previous keynote and brewer’s point that we should build simple reusable components that are intended to be combined; we can reason about these components (their latency, their failures) in a direct way. he also mentions that people ten years ago thought this would be impossible – building libraries of reusable components for large-scale systems. so we have composition now, but we have to compose /guarantees/. two things make distributed systems hard: asynchrony and partial failure. asynchrony could be handled in isolation by timestamping things and interleaving replies; partial failure could be handled in isolation by providing replication in time or space (more nodes, or replaying messages). these fixes do not work with both problems together though..
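the ‘timestamping things’ idea is essentially a Lamport logical clock; a minimal sketch (my illustration, not from the talk):

```python
class LamportClock:
    """Logical clock: tick on local events and sends, merge on message
    receipt, so causally related events get ordered timestamps."""
    def __init__(self):
        self.time = 0

    def tick(self):
        # local event (or a send): advance our own time
        self.time += 1
        return self.time

    def receive(self, msg_time):
        # merge the sender's timestamp so we stay "after" the send
        self.time = max(self.time, msg_time) + 1
        return self.time
```

with only asynchrony to worry about, timestamps like these let a node order and interleave replies consistently; the talk’s point is that this alone doesn’t survive once partial failure is in the mix too.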

more ⪧