⊃ javascript and es6

notes from glenmaddern.com
- ran through the loopgifs demo from the video
- it was cool to use js in a much more structured way (and in a functional style)
- modules and some of the tools are both nice, though I still had weird node / npm install issues ..confusion I remember having years ago
- nice to be exposed to some of the new es6 features like arrow functions and classes
- also semicolons..I didn’t type a single one, whoa, what happened there?

⊃ optimizing in go

notes on optimization..

more ⪧

⊃ golang first bits

spurious notes on golang..

more ⪧

⊃ microservices and the monolith

- part I: lots of nice links to fowler. started adding new features as microservices and left the monolith in place; the microservices talked to the monolith’s public API like any other app ..but eventually had to add an internal api
- part II: broke the monolith apart by identifying ‘bounded contexts’ – “well-contained feature sets, highly cohesive, and not too coupled with the rest of the domain”
- part III: broke their team down to..

more ⪧

⊃ microservices, first bits

on architecture
- “The job of good system architects is to create a structure whereby the components of the system – whether Use-cases, UI components, database components, or what have you – have no idea how they are deployed and how they communicate with the other components in the system.”

on micro-architectures
- interesting note on synthetic / active monitoring: periodically simulate a customer using your service and measure the simulated actions (a quick sketch of this below) ..but overall not very substantive

on configuration drift
- either reapply configurations or burn down the servers (see fowler’s ‘phoenix server’ concept)
- this is from 2011, and I think it’s a pretty well-accepted idea now

on the size of microservices
- one brain should be able to comprehend the whole thing
- may also be useful to divide along read/write boundaries so those can scale separately
- “The point, though, [is] that each application within a business capability should have a comprehensible single purpose.
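
synthetic monitoring is concrete enough to sketch. a minimal version in python, assuming the `requests` library and a made-up customer-facing URL; a real probe would walk a whole customer flow and ship the timings to a metrics store rather than print them:

```python
# synthetic / active monitoring: periodically simulate a customer
# action and measure how long it takes.
# the URL and the 60-second period are made-up placeholders.
import time
import requests

CHECK_URL = "https://example.com/signup"  # hypothetical customer action

while True:
    start = time.monotonic()
    try:
        resp = requests.get(CHECK_URL, timeout=5)
        ok = resp.status_code == 200
    except requests.RequestException:
        ok = False
    elapsed = time.monotonic() - start
    # in practice, emit these to a metrics system instead of printing
    print(f"ok={ok} latency={elapsed:.3f}s")
    time.sleep(60)
```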

more ⪧

⊃ on apache kafka

Notes from this slideshare.
- very high throughput messaging
- producers write to brokers; consumers read from brokers
- data is stored in ‘topics’; topics are split into partitions, which are replicated
- topics are ordered, immutable sequences of messages that are appended to
- each message in a partition receives a sequential, unique (per-partition) id called the ‘offset’
- consumers track their pointers via (topic, partition, offset) tuples (see the sketch after these notes)
- partitions are for consumer parallelism
- there are also..
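
a minimal consumer sketch with the kafka-python package (the topic name and broker address are made-up placeholders); each record that comes back carries the (topic, partition, offset) tuple that names its position in the log:

```python
# consume a topic and print each message's (topic, partition, offset);
# offsets are sequential and unique within a partition.
# 'events' and localhost:9092 are made-up placeholders.
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "events",
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",  # start from the beginning of each partition
)

for record in consumer:
    print((record.topic, record.partition, record.offset), record.value)
```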

more ⪧

⊃ on how committees invent

from melconway.com
- “Given any design team organization, there is a class of design alternatives which cannot be effectively pursued by such an organization because the necessary communication paths do not exist.”
- “there is a very close relationship between the structure of a system and the structure of the organization which designed it” (in fact they’re identical)
- “It is an article of faith among experienced system designers that given any system design, someone someday will find a better one to do the same job.” (hah, no irony here either, I think)
- “To the extent that an organization is not completely flexible in its communication structure, that organization will stamp out an image of itself in every design it produces.”
- and then a perhaps cynical, but certainly interesting, take on managing systems, which concludes that communication in large teams disintegrates, and so, qed, large systems themselves disintegrate
- “Because the design which occurs first is almost never the best possible, the prevailing system concept may need to change.

more ⪧

⊃ outwards from the middle of the maze

https://www.youtube.com/watch?v=ggCffvKEJmQ
- developers used to have application-level guarantees via transactions (think action1, action2, commit), but in today’s ecosystem (mostly sans transactions) there are fundamental problems with latency, concurrency and partial failure that are hard to hide
- when will we get those guarantees back? and how?
- he mentions brewer’s previous keynote, and brewer’s point that we should build simple reusable components that are intended to be combined; we can reason about these components (their latency, their failures) in a direct way; he also mentions that ten years ago people thought this would be impossible – building libraries of reusable components for large-scale systems
- so we have composition now, but we have to compose /guarantees/
- two things make distributed systems hard: asynchrony and partial failure
- asynchrony alone could be handled by timestamping things and interleaving replies (toy sketch below)
- partial failure alone can be handled by providing replication in time or space (more nodes, or replaying messages)
- these fixes do not work with both problems together though..
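
the timestamping idea is easy to see in miniature. a toy sketch (mine, not the talk’s): replies arrive in arbitrary order, but sorting by timestamp gives every observer the same interleaving – and it silently assumes every reply eventually arrives, which is exactly what partial failure breaks:

```python
# toy sketch: asynchrony alone is manageable if messages carry
# timestamps – process them in timestamp order, not arrival order.
# the replies below are made-up data.
arrived = [
    (3, "reply from node c"),
    (1, "reply from node a"),
    (2, "reply from node b"),
]

# every observer that sorts by timestamp sees the same interleaving
for timestamp, reply in sorted(arrived):
    print(timestamp, reply)

# caveat: this assumes all replies eventually arrive; add partial
# failure and you may wait forever – the two problems compound.
```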

more ⪧

⊃ service discovery

progrium.com
- there is this mesos project, which says DNS is insufficient due to the impracticalities of managing one’s own DNS and the inability to handle real-time changes in name resolution
- google used chubby in ’06 as a distributed lock and kv store, replacing DNS
- zookeeper is open-source chubby: high availability and reliability in exchange for performance
- both use the paxos consensus algorithm (see raft for an alternative)
- etcd is the new http-friendly alternative to zookeeper (registration sketch below)
- says service discovery should offer a consistent, ha directory + registration + health monitoring + lookup and connection features
- in another post, talks about consul.io, a “powerful tool” with monitoring, config store, DNS..maybe our tools should be less powerful and more puny.
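
the http-friendliness is most of etcd’s charm, so registration fits in a few lines. a sketch against etcd’s v2 keys API, assuming a local etcd and made-up service names and addresses; the ttl doubles as crude health monitoring, since the key vanishes unless the service keeps refreshing it:

```python
# register and look up a service in etcd over plain http (v2 keys API).
# the key path, address and ttl are made-up placeholders.
import requests

ETCD = "http://127.0.0.1:2379"

# registration: write this instance's address under a known directory;
# the ttl makes the entry expire unless the service refreshes it
requests.put(
    f"{ETCD}/v2/keys/services/web/instance-1",
    data={"value": "10.0.0.5:8080", "ttl": 30},
)

# lookup: list the directory to find the live instances
resp = requests.get(f"{ETCD}/v2/keys/services/web")
for node in resp.json()["node"].get("nodes", []):
    print(node["key"], node["value"])
```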

more ⪧

⊃ shapefiles

some basics on shapefiles:
- wikipedia has a nice intro
- .shp, .shx and .dbf files are required – they’re binary files with all of the vector data, indexing info and feature attributes
- everything’s sequential – the first record in the .shp corresponds to the first record in the .dbf
- fiona is a nice python package for reading shapefile info – it..
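
a minimal fiona sketch (the filename is a made-up placeholder); the schema comes from the .dbf, and iterating yields records in the same sequential order described above:

```python
# read a shapefile with fiona; the .shp/.shx/.dbf trio opens as one
# dataset, and records come back in sequential order.
# 'parcels.shp' is a made-up filename.
import fiona

with fiona.open("parcels.shp") as src:
    print(src.driver)  # 'ESRI Shapefile'
    print(src.crs)     # projection info, from the optional .prj
    print(src.schema)  # geometry type + attribute fields from the .dbf

    for record in src:
        geometry = record["geometry"]      # vector data from the .shp
        attributes = record["properties"]  # matching row from the .dbf
        print(geometry["type"], attributes)
```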

more ⪧