0018: last reflections, why start a new database conference, 2021 retrospective, imp schemaless db + crdt, office hours, internal inconsistency in the wild, rss feeds, salsa needs finite collections, tiddlywiki vs unigraph, multidimensional indexes, arrow, just don't fsync, testing distributed systems, tigerbeetle perf demos, web3, explicit formal structure, zig doctests, rust arenas, semidirect products of crdts, single-program distributed systems, sqlite qpsg, valhalla, mundanity of excellence, to mmap or not to mmap, libgavran, relational e-matching

Published 2022-01-17

Writing:


Last year I only worked on the language side of imp. In the last few weeks I've been catching up on the database side a little.

Heterogeneous types describes how the new type inference enables gradual typing on top of a schemaless relational database (and also gives imp modules, structs and tagged unions).

Not yet written up:


I'm experimenting with holding office hours every Friday afternoon. Schedule here.


In Internal consistency in streaming systems I used a pretty contrived example for testing. But in last week's office hours I talked to a financial dev who had observed internal consistency violations in production. He gave the example of calculating the value of a portfolio of oil price spreads - an aggregate over a self-join. Whenever new prices arrived in the system, traders would see the value of their portfolios fluctuate wildly for a few seconds. It sounds like they currently solve this with ad-hoc buffering but are interested in how to design a system that guarantees internal consistency.
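A minimal sketch of that failure mode (instruments, prices, and quantities invented): the portfolio value is an aggregate over a self-join of the prices table, so a system that emits output between the updates to the two legs of a spread shows readers a value that never corresponded to any actual market state.

```python
prices = {"WTI_MAR": 85.0, "WTI_JUN": 83.0}
positions = [("WTI_MAR", "WTI_JUN", 1000)]  # (long leg, short leg, quantity)

def portfolio_value():
    # An aggregate over a self-join: each position reads the prices table twice.
    return sum(qty * (prices[long] - prices[short])
               for long, short, qty in positions)

print(portfolio_value())  # 2000.0

# A tick moves both legs up by 5, so the spread (and the true value) is
# unchanged. But emitting output after each individual update exposes an
# intermediate state that never existed in the market:
prices["WTI_MAR"] = 90.0
print(portfolio_value())  # 7000.0 <- the transient the traders saw
prices["WTI_JUN"] = 88.0
print(portfolio_value())  # 2000.0 <- consistent again
```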


A list of rss feeds I'm subscribed to - https://gist.github.com/jamii/15027b87abb36785bdb05d75422ed507.


I've been thinking about the differences between salsa and differential dataflow.

Salsa is a much simpler system, which is appealing for smaller datasets. But I don't think it's a good target for sql/datalog/imp as it stands because it doesn't support the concept of a finite collection that can be iterated over, except by reducing the entire collection to a single key. This makes efficient incremental joins difficult to implement.

Whereas differential dataflow doesn't support the concept of a lazy infinite collection, except by explicitly transforming the code into a dataflow over a finite demand set.

This seems similar to the tradeoffs between push-based and pull-based systems, but I can't convince myself that it's an inherent tradeoff.

Noria for example tries to blend both finite enumerable collections and lazy infinite collections (parameterized sql queries). It gives up internal consistency for the sake of performance, but could that be avoided?

Is there a way to keep the simplicity and laziness of salsa but still explicitly support finite collections?
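A minimal sketch of the limitation (this is not salsa's real API, just the shape of the problem): if the finest-grained input a memoized query can depend on is a whole table as a single value, then any one-row change invalidates the entire join and there is nothing to incrementalize against.

```python
from functools import lru_cache

left = (("a", 1), ("b", 2))
right = ((1, "x"), (2, "y"))

@lru_cache
def join(left, right):
    # Recomputed from scratch whenever either *whole* table changes.
    return [(a, b, c) for (a, b) in left for (b2, c) in right if b == b2]

print(join(left, right))  # [('a', 1, 'x'), ('b', 2, 'y')]

# Appending one row produces a brand-new cache key; the old result is
# useless, unlike differential dataflow's per-row deltas.
left = left + (("c", 2),)
print(join(left, right))  # recomputes everything
```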


I've always been fascinated by TiddlyWiki. It's a bizarre bootstrapped writing and programming environment that can save itself into a self-contained html file, supporting a huge community of mostly not-professional-programmers.

Grok TiddlyWiki is a complete guide to TiddlyWiki. The guide itself is built in TiddlyWiki and features programming exercises which can be completed in the inline editing tools and a spaced-repetition system which saves your progress into your local copy of the wiki.

But as interesting as TiddlyWiki is, it's hard to sync changes between different wikis and the full-refresh-on-every-change model often suffers from performance bankruptcy.

There's something of a gold-rush on notetaking systems at the moment, but so far the only one I've seen that is as malleable as TiddlyWiki is Unigraph. It builds on top of Dgraph, uses GraphQL as a query language and supports sync between databases. Enough to replace TiddlyWiki?


Learning multi-dimensional indexes

The earlier papers on learned indexes seemed kind of silly. But the state of the art for multi-dimensional indexes is... not great, especially as the number of dimensions grows. It seems totally plausible that many high-dimensional datasets have local low-dimensional approximations.
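The underlying trick, reduced to a one-dimensional toy (data and fit invented): model the key -> position mapping (the CDF) with a cheap function and repair its error with a bounded local search. The multi-dimensional papers generalize this to learned layouts over several columns at once.

```python
keys = sorted([2, 3, 5, 7, 11, 13, 17, 19, 23, 29])
n = len(keys)

# "Train": least-squares fit of position ~ slope * key + intercept.
mean_k = sum(keys) / n
mean_p = (n - 1) / 2
slope = (sum((k - mean_k) * (p - mean_p) for p, k in enumerate(keys))
         / sum((k - mean_k) ** 2 for k in keys))
intercept = mean_p - slope * mean_k

def lookup(key):
    # Predict a position, then correct the model's error with a local scan.
    guess = max(0, min(n - 1, round(slope * key + intercept)))
    while guess < n - 1 and keys[guess] < key:
        guess += 1
    while guess > 0 and keys[guess] > key:
        guess -= 1
    return guess if keys[guess] == key else None

print(lookup(13))  # 5
```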

Way back in Eve days I was fascinated by the idea of not having to pick a variable ordering for joins and instead picking the most selective variable at each level of the search. The main obstacle to this was that multi-dimensional indexes were just not fast enough. Maybe it's worth revisiting those ideas.
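For reference, a toy of the join style I mean (relations invented, and with the variable order a -> b -> c hard-coded rather than picked adaptively by selectivity): each variable is bound by intersecting candidate sets from every relation that mentions it, which is exactly the access pattern that needs fast multi-dimensional indexes.

```python
# Triangle query: R(a,b) JOIN S(b,c) JOIN T(a,c).
R = {(0, 1), (0, 2), (1, 2)}
S = {(1, 2), (2, 0)}
T = {(0, 2), (1, 0)}

def triangles():
    for a in {a for a, _ in R} & {a for a, _ in T}:
        for b in {b for ra, b in R if ra == a} & {b for b, _ in S}:
            for c in ({c for sb, c in S if sb == b}
                      & {c for ta, c in T if ta == a}):
                yield (a, b, c)

print(sorted(triangles()))  # [(0, 1, 2), (1, 2, 0)]
```

The set comprehensions here are linear scans; an adaptive version would also need to compare the sizes of the candidate sets cheaply at every level, hence the interest in better multi-dimensional indexes.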


https://www.youtube.com/watch?v=YhF8YR0OEFk

A recent update on what's going on in Arrow. The various projects working on embeddable arrow-native execution engines (eg gandiva) seem promising.

Also mentions substrait which aims to provide a common standard for sharing query plans between systems.


https://glaubercosta-11125.medium.com/winds-of-change-in-web-data-728187331f53

There are two common models for database persistence: keep the entire dataset in memory, or store it on disk and fsync to guarantee durability.

This article mentions in passing a third - multi-node, on-disk, but just never call fsync. This can get within an order of magnitude of the throughput and latency of in-memory databases but can handle much larger datasets.
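A toy measurement of the gap (file paths invented; absolute numbers will vary wildly by disk and filesystem):

```python
import os, time

def append_all(path, n, durable):
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_APPEND)
    start = time.perf_counter()
    for _ in range(n):
        os.write(fd, b"entry\n")
        if durable:
            os.fsync(fd)  # force the write all the way to stable storage
    os.close(fd)
    return time.perf_counter() - start

print("fsync every write:", append_all("/tmp/wal_durable", 1000, True))
print("never fsync:      ", append_all("/tmp/wal_fast", 1000, False))
# The no-fsync version leaves flushing to the OS page cache, so a crash
# can lose recent writes - which is why this model pairs it with
# replication across multiple nodes.
```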


https://asatarin.github.io/testing-distributed-systems/

Curated list of resources on testing distributed systems.


https://github.com/coilhq/tigerbeetle#performance-demos

Explains design decisions in TigerBeetle via a series of reproducible performance experiments. This kind of design rationale is usually completely impossible to recover from the history of most systems, so it's great to see it embedded directly in the repo.


I haven't paid much attention to blockchain stuff because usually both sides of the argument are vague and unsubstantiated. But these seem pretty detailed.

https://www.usenix.org/publications/loginonline/web3-fraud

...the Ethereum 'world computer' has roughly 1/5,000 of the compute power of a Raspberry Pi 4,

...the cost of writing a single 3kB message to the Ethereum blockchain is the same price as a year of storage on Amazon for the entire 1 TB Ethereum blockchain. Or the same price as buying a 1 TB M.2 SSD.

https://moxie.org/2022/01/07/web3-first-impressions.html

...there's no hash commitment for the data located at the URL. ...Anyone with access to that machine, anyone who buys that domain name in the future, or anyone who compromises that machine can change the image, title, description, etc for the NFT to whatever they'd like at any time (regardless of whether or not they 'own' the token).

It doesn't functionally matter that my NFT is indelibly on the blockchain somewhere, because the wallet (and increasingly everything else in the ecosystem) is just using the OpenSea API to display NFTs, which began returning 304 No Content for the query of NFTs owned by my address!


https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.40.3597&rep=rep1&type=pdf

...the cause of a number of unexpected difficulties in human-computer interaction lies in users' unwillingness or inability to make structure, content, or procedures explicit.

Many times he struggled to create a title for his note; he often claimed that the most difficult aspect of this task was thinking of a good title.

...instead of building large interconnected networks of nodes (like the designers expected), users created linkless spaces of nodes arranged in regular graphical patterns that indicated relationships among nodes spatially and visually.

Over time, users built up structured templates similar to form-based interfaces, thus reducing the overhead involved in adding structure to the information they were entering.

We believe that experts cannot reliably give an account of their expertise: We have to exercise their expertise on real problems to extract and model their knowledge.

While moving the interface closer to the user's intentions may make it difficult to realize some intentions, changing the user's conception of the domain may prevent some intentions from arising at all. So while a well designed special purpose language may give the user a powerful way of thinking about the domain, it may also restrict the user's flexibility to think about the domain in different ways.

...formalizing information requires one to commit to an explicit structure for the information. ...Since a user's understanding of any non-trivial task, such as performing an analysis or completing a design, evolves as they attempt to complete the task, users resist making such commitments.

[On piles of paperwork] You don't want to put it away because that way you'll never come across it again. ...it's almost like leaving them out means I don't have to characterize them.

Most of these users know how to create subdirectories or folders to organize their files but postpone classification until they 'have more time' or 'the mess gets too bad.'

...the characteristics of the research being performed influenced the organization and communication of the research groups. A system which attempts to impose a particular structure on communication will likely not match any given group's actual communication structure.

Providing mechanisms for information to be entered in a less formal representation and then be incrementally formalized and structured is thus a fundamental way system designers can support intellectual activities with computational tools.

...even if the system's inferences are incorrect at times, as long as they are right part of the time and it is apparent to the user when the system has made the wrong inference, features based on automated recognition of implicit structure will cost the user little for the benefit they provide. ...Because the effect of the inference [...] is transient, the effects of an incorrect inference do not have an impact on later use of the system.

An old paper but still relevant.

Related to the 'search > organizing' argument.

Also an argument against the widespread success of the semantic web. Designing data models is really hard and most people usually aren't interested in paying the upfront cost in return for a vague future reward.


The proposed design for documentation tests in zig is neat.


https://blog.reverberate.org/2021/12/19/arenas-and-rust.html

The part in the middle starting with 'A Rust struct exposed as a Python class through #[pyclass] cannot have type parameters' perfectly captures that sinking feeling I occasionally get when working in rust and realizing that my life is about to get very difficult.


Composing and Decomposing Op-Based CRDTs with Semidirect Products

I'm less excited by this paper than I thought I would be.

Basically, they define a stronger ordering on events that extends the causal order. Whenever an event arrives out of order, they figure out how to transform the new event based on all the previously applied events which should have been applied after the new event, and then apply this transformed new event to the state.

An equivalent approach would be to incrementally recompute the state from the point at which the new event needs to be inserted. This replaces the problem of figuring out how to transform events (which needs to be done for every pair of event types) with the problem of incremental computation (which we already know how to do pretty well for the majority of programs). This is how eg Matrix works, which gives me confidence that we can scale it to non-trivial crdts.
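A minimal sketch of that alternative (event encoding invented; full snapshots stand in for real incremental recomputation): keep the log sorted by the stronger order, insert late arrivals at their proper position, and replay from there.

```python
import bisect

log = []        # events, kept sorted by (timestamp, replica_id)
snapshots = []  # state after each event, to make replay cheap

def apply(state, event):
    return state + [event[1]]  # stand-in for the crdt's apply function

def insert(event):
    i = bisect.bisect(log, event)
    log.insert(i, event)
    state = snapshots[i - 1] if i > 0 else []
    del snapshots[i:]      # states after the insertion point are now stale
    for e in log[i:]:      # replay from the insertion point forward
        state = apply(state, e)
        snapshots.append(state)

insert(((1, "a"), "x"))
insert(((3, "b"), "z"))
insert(((2, "a"), "y"))  # arrives late: replay from position 1, not 0
print(snapshots[-1])     # ['x', 'y', 'z']
```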


No more DSLs: Implement and deploy a distributed system with a single program

If you want to write a distributed system, then instead of writing a thousand different programs and configuration files in many different DSLs, you can use an approach I call "single-program systems", and write just one program.

[This] allows you to write a program which manages every aspect of the distributed system above the hardware level, such that running the program deploys the distributed system, and the program and the distributed system can really be said to be the same thing.

One of the frustrations I have with container-based systems is that it's often difficult or impossible to run them outside docker (due to depending on custom hostnames, network bridges, startup scripts hidden inside docker config files etc). But then debugging and profiling tools are typically not included in the container, and are hard to configure to reach across the sandbox.

The idea in this article is to move all of this orchestration configuration into the program itself, so that I can just add a switch to deploy everything locally, or use regular code to express different configurations.
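A toy sketch of that shape (hostnames and the --local switch invented): the same program is both the node and the deployer, so a local debug run and a fleet deploy differ only by a flag.

```python
import subprocess, sys

HOSTS = ["10.0.0.1", "10.0.0.2"]  # hypothetical fleet

def run_node(node_id):
    print(f"node {node_id}: serving...")  # the actual service goes here

def deploy(local):
    for i, host in enumerate(HOSTS):
        if local:
            # everything on this machine: plain processes, easy to attach
            # debuggers and profilers to
            subprocess.Popen([sys.executable, __file__, "node", str(i)])
        else:
            subprocess.run(["scp", __file__, f"{host}:app.py"], check=True)
            subprocess.Popen(["ssh", host, "python3", "app.py", "node", str(i)])

if __name__ == "__main__":
    if sys.argv[1:2] == ["node"]:
        run_node(int(sys.argv[2]))
    else:
        deploy(local="--local" in sys.argv)
```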

Also notable - The Unix process API is unreliable and unsafe.


https://www.sqlite.org/queryplanner-ng.html#_the_sqlite_query_planner_stability_guarantee

The Query Planner Stability Guarantee means that if all of your queries run efficiently during testing, and if your application does not change the schema, then SQLite will not suddenly decide to start using a different query plan, possibly causing a performance problem, after your application is released to users. If your application works in the lab, it will continue working the same way after deployment.


State of Valhalla

Java is (maybe) getting something much like Julia's immutable stack-allocatable types. The backwards compatibility issues are intimidating, but apparently on the way to being solved.


http://rittersp.com/wp-content/uploads/2014/03/Chambliss-Mundanity-of-Excellence.pdf

The foregoing analysis suggests that we have overlooked a fundamental fact about Olympic-class athletes... Excellence is mundane. Excellence is accomplished through the doing of actions, ordinary in themselves, performed consistently and carefully, habitualized, compounded together, added up over time. ...the differences are neither unmanageable nor, taken one step at a time, terribly difficult.

Similar argument to Humans are not automatically strategic - the only trick to performing well is to consistently do all the obvious things, but humans are not good at that.

I have enough abstract reasoning ability to understand that I'm safe on the glass floor of a tall building, or that ice cream is not healthy, or that exercise furthers my goals... but this doesn't lead to an automatic updating of the reward gradients that, absent rare and costly conscious overrides, pull my behavior.

...systematically training one's motivational systems in this way is also not automatic for us.


Are you sure you want to use mmap

RE: Are you sure you want to use mmap

I have no idea which of the various points presented here are correct.


libgavran

A storage engine and a textbook, developed in tandem, by the author of 'RE: Are you sure...' above.


Relational E-matching

Compared to a conventional baseline, relational e-matching is simpler to implement and orders of magnitude faster in practice.

If all you have is a hammer, better hope it's a worst-case optimal join hammer because I'll be damned if there aren't a lot of matching nails hanging around in the most surprising subjects.
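For my own notes, the paper's core move reduced to a toy (e-graph contents invented): store one relation per function symbol, and then matching the pattern f(g(x), x) is literally a conjunctive query - a join.

```python
# One relation per function symbol; rows are (children..., eclass).
F = {(20, 1, 10), (21, 2, 11)}  # f(child0, child1) = eclass
G = {(1, 20), (3, 21)}          # g(child0) = eclass

# Pattern f(g(x), x): join F and G on F's first child = G's eclass,
# with the variable x shared between F's second child and G's child.
matches = [(root, x)
           for (fa, x, root) in F
           for (gx, ga) in G
           if fa == ga and x == gx]
print(matches)  # [(10, 1)] i.e. f(g(1), 1) in e-class 10
```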