
The Linkspace Guide

This document is kept up to date on a best-effort basis.
Sometimes the Rust linkspace docs are ahead of this guide.

This is a technical document describing the linkspace packet format and the software library to create, query, and process packets.

lk --version
linkspace-cli linkspace-cli - 0.5.0 - main - 624720a - 1.75.0-nightly

Introduction

Linkspace is an open-source library and protocol for building event-driven applications that use a distributed log as their source of truth. The linkspace packet format is designed to be extremely fast to read, write, and interpret. Depending on your use case, you can limit yourself to just those functions and write your event streams and network from scratch. The library has various tools to make developing increasingly complex systems much easier.

The library is structured around 4 concepts.

  • Point - Plain bytes in any format, optionally with a spacename and links to other points.
  • ABE - Language agnostic (byte) templating for convenience.
  • Query - A list of predicates and options for selecting packets.
  • Runtime - A runtime around a multi-reader single-writer database for saving and querying.

With that foundation, common challenges are addressed by a set of Conventions.

Setup

This guide uses Python and (Bash) CLI snippets.

Binary

The download contains the CLI, the Python library, and the examples used in this document.

Package manager

pip install linkspace

cargo +nightly install linkspace-cli --git https://github.com/AntonSol919/linkspace

Build from source

git clone https://github.com/AntonSol919/linkspace

for users

make install-lk install-python

for development / debug builds

source ./activate builds and sets PATH and PYTHONPATH env variables.

API overview

The linkspace API is the Packet type and a small set of functions. It is available as the Rust crate linkspace, and bindings for other languages follow the same API.

It consists of the following:

Point

Rust docs

Points are the basic units in linkspace. They carry data, link to other points, and may contain information about the who, what, when, and how. There are three kinds of points: datapoints, linkpoints, and keypoints. A point has a maximum size of 2^16 - 512 bytes.

The library exposes fields as properties.

The functions automatically generate the hash and prepend a netheader when you build a point. The result is a packet. For simplicity's sake, all functions in the library deal only with packets.

lk_datapoint

echo "Hello, Sol!" | lk data | lk pktf "[hash:str]\n[data]"
Yrs7iz3VznXh-ogv4aM62VmMNxXFiT4P24tIfVz9sTk
Hello, Sol!
from linkspace import *
datap = lk_datapoint(b"Hello, Sol!\n")
print(f'Packed data {datap.data.decode()} into a packet with hash {b64(datap.hash)}')
print(lk_eval2str("Or use abe, a language agnostic template. Eg. [hash:str] = [data]", datap))
Packed data Hello, Sol!
 into a packet with hash Yrs7iz3VznXh-ogv4aM62VmMNxXFiT4P24tIfVz9sTk
Or use abe, a language agnostic template. Eg. Yrs7iz3VznXh-ogv4aM62VmMNxXFiT4P24tIfVz9sTk = Hello, Sol!
// import init,* as linkspace from "/pkg/latest/linkspace.js"; -- already imported 
let datap = lk_datapoint("Hello, Sol!");
log( datap.data );
log(new TextDecoder().decode(datap.data))
log(lk_eval2str("Or use abe, a language agnostic template.\\n Eg. [hash:str] = [data]",datap));

lk_linkpoint

A linkpoint can hold data, hold links to other points, and be found by its spacename.

It consists of these fields:

Field       Size
Group       32        the intended recipients.
Domain      16        the intended application.
Spacename   var<240   a sequence of bytes, e.g. '/dir1/dir2/thing'
Stamp       8         big endian UNIX timestamp in microseconds.
Links       48*n      a variable-length list of (Tag16, Pointer32)
- Link[0]   48
- Link[1]   48
- Link[2]   48
- Link[…]   48
Data        var<2^16

Each (Domain,Group) is a 'tree', and each (Domain,Group,Spacename) is a point's 'location'.

All values, including the Spacename, contain arbitrary bytes.

An entire point can be at most 2^16 - 512 bytes in size. The header is always 4 bytes, so a data point can hold a maximum of 2^16 - 512 - 4 bytes. This space is shared between the links and the data, so beware that too much data and too many links won't fit into a single packet. To overcome this, create multiple points and link them together in a new linkpoint. These limits ensure there is only a single level of fragmentation to deal with.
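A quick sanity check of this size budget (the sizes come from the table above; the fixed fields other than the 4-byte header are ignored here, so the real data limit for linkpoints is somewhat lower):

```python
# Size budget of a single point, per the limits stated above.
MAX_POINT = 2**16 - 512   # maximum size of an entire point
HEADER = 4                # the header is always 4 bytes
LINK_SIZE = 16 + 32       # each link is a (Tag16, Pointer32) pair

def max_data_for(links: int) -> int:
    """Upper bound on data bytes once `links` links are included.
    Simplified: ignores group/domain/spacename/stamp fields."""
    return MAX_POINT - HEADER - links * LINK_SIZE

print(MAX_POINT)          # 65024
print(max_data_for(0))    # 65020
print(max_data_for(100))  # 60220
```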

Point hashes, GroupIDs, and public keys are 32 bytes.
They are usually encoded in URL-safe no-padding base64, e.g. RD3ltOheG4CrBurUMntnhZ8PtZ6yAYF.
Such strings are hard to read.
The [...] syntax (ABE) allows you to name and manipulate bytes.
The following example shows that [#:pub] resolves to the bytes RD3ltOheG4CrBurUMntnhZ8PtZ6yAYF in both the group and the second link.
Furthermore, if no group is provided it defaults to [#:pub].
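The encoding itself is plain URL-safe base64 with the padding stripped; a standard-library sketch of what a helper like the b64 seen in the Python examples amounts to:

```python
import base64

def b64(data: bytes) -> str:
    """URL-safe, no-padding base64, as used for hashes, groups, and keys."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

# A 32-byte value always encodes to 43 characters.
null_group = b"\x00" * 32
print(b64(null_group))            # 43 'A' characters
print(len(b64(null_group)))       # 43
```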

Datapoints do not have a 'create' field, so they get the same hash given the same data. If we had forced a specific 'create' stamp for both the Python and bash examples, they would have produced the same hash. By default 'create' is set to the current time (microseconds since epoch), and thus the hashes differ.
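That 'create' stamp is nothing more than microseconds since the Unix epoch stored as 8 big-endian bytes (matching the Stamp field described earlier); a standard-library sketch of the round trip:

```python
import time

def now_stamp() -> bytes:
    """Current time as an 8-byte big-endian microseconds-since-epoch stamp."""
    return int(time.time() * 1_000_000).to_bytes(8, "big")

def read_stamp(stamp: bytes) -> int:
    """Read a big-endian stamp back into an integer."""
    return int.from_bytes(stamp, "big")

s = now_stamp()
print(len(s))           # 8
print(read_stamp(s))    # e.g. 1699121012184944
```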

The command lk link builds one or more linkpoint packets and outputs them to stdout by default. Whenever a CLI command deals with (domain, group, spacename) tuples, they are set by the first argument: DOMAIN:GROUP:SPACENAME. Here two links are added with the tags first_tag_1 and another_tag.

lk link "a_domain:[#:pub]:/dir1/dir2/thing" -- \
          first_tag_1:Yrs7iz3VznXh-ogv4aM62VmMNxXFiT4P24tIfVz9sTk \
          another_tag:[#:pub] \
| lk pktf

type	LinkPoint
hash	jmZnIekOx4ag4HMA0PrBR4rMWiv2VMtdKFoOCFwzCHA
group	[#:pub]
domain	a_domain
space	/dir1/dir2/thing
pubkey	[@:none]
create	1699121012184944
links	2
	first_tag_1 Yrs7iz3VznXh-ogv4aM62VmMNxXFiT4P24tIfVz9sTk
	another_tag Yrs7iz3VznXh-ogv4aM62VmMNxXFiT4P24tIfVz9sTk

data	0
echo hello | lk link my_domain --data-stdin | lk pktf
type	LinkPoint
hash	aon0tcUH3tCBPPHxh_lcEpUpVl-LDcV7fZV8s8kI69I
group	[#:pub]
domain	my_domain
space	
pubkey	[@:none]
create	1699121012203659
links	0

data	6
hello

The API deals with arbitrary bytes, not encoded strings. Some examples of Python code that produce a value of type 'bytes':

  • "some string".encode()
  • the b"byte notation"
  • fields like apkt.group, apkt.hash, apkt.domain, etc.
  • evaluating an ABE string with lk_eval.
ptr1 = lk_eval("[b:Yrs7iz3VznXh-ogv4aM62VmMNxXFiT4P24tIfVz9sTk]")
link1 = Link(tag=b"first tag 1",ptr=ptr1)

ptr2 = lk_eval("[#:pub]")
link2 = Link(b"another tag",ptr2)

assert(link1.ptr == link2.ptr)

datap = lk_datapoint(b"Hello example");
link3 = Link(b"a datapoint",datap.hash)

linkp = lk_linkpoint(
    domain=b"example-domain",
    group=lk_eval("[#:pub]"),
    data=b"Hello, World!",
    links=[link1,link2,link3]
)
str(linkp)
type	LinkPoint
hash	n3Ojv-LgvqsrTlIyc2Eho3u4DFdUua7Gt720u2cim0M
group	[#:pub]
domain	example-domain
space	
pubkey	[@:none]
create	1699121012306179
links	3
	first tag 1 Yrs7iz3VznXh-ogv4aM62VmMNxXFiT4P24tIfVz9sTk
	another tag Yrs7iz3VznXh-ogv4aM62VmMNxXFiT4P24tIfVz9sTk
	a datapoint eVo9H3i6aoDZOT7D3FJfG5foxDXtf7f16vneRXeGLIo

data	13
Hello, World!

lk_keypoint

A keypoint is a linkpoint with an additional public key and signature.

There are functions to generate, encrypt, and decrypt a linkspace key, leaving you to deal with saving it. Alternatively, the lk_key function does it all for you, with the added benefit that you can address your own public key as [@:me:local].

export LK_DIR=/tmp/linkspace
lk --init key --decrypt-cost 0 --password "my secret" # --decrypt-cost 0 only speeds up building this doc; remove it for real keys
$argon2d$v=19$m=8,t=1,p=1$Z8eP6T6nQMHRmh1zRDjAlkOXe8+v4dqeT8X0NSRXsBo$1dIQQiZMCWWp63ZTD1FZj1AFURkpHRd/DHYTYnUUY7E
Z8eP6T6nQMHRmh1zRDjAlkOXe8-v4dqeT8X0NSRXsBo
lk keypoint "example::" --password "my secret" | lk pktf
type	KeyPoint
hash	zluzgSZonFpI6cOKIDAT0a0heY3WD9B-Xhov5RIRjIw
group	[#:pub]
domain	example
space	
pubkey	[@:me:local]
create	1699121012356650
links	0

data	0

The CLI also accepts lk link --sign instead of lk keypoint.

lk = lk_open("/tmp/linkspace",create=True)
key = lk_key(lk,b"my secret");
example_keypoint = lk_keypoint(key=key,domain=b"example")
str(example_keypoint)
type	KeyPoint
hash	Pvvc2MxDcL5wLM0yBVXaDfhCPX2W_3_-oRX8x7-7pRQ
group	[#:pub]
domain	example
space	
pubkey	[@:me:local]
create	1699121012451698
links	0

data	0

Fields

In Python you can access these fields directly as bytes. The fields are not writable because they are included in the hash.

[attr for attr in dir(lk_linkpoint())  if not "__" in attr]
['comp0', 'comp1', 'comp2', 'comp3', 'comp4', 'comp5', 'comp6', 'comp7', 'comp_list', 'create', 'data', 'data_buffer', 'depth', 'domain', 'get_data_str', 'group', 'hash', 'hop', 'links', 'netflags', 'pkt_type', 'pubkey', 'recv', 'rooted_spacename', 'signature', 'size', 'spacename', 'stamp', 'ubits0', 'ubits1', 'ubits2', 'ubits3', 'until']

where comp0 through comp7 are the spacename components.

Some fields we haven't seen so far are writable, but they are not relevant for most applications.

Notes

Groups signal the intended set of recipients. Domains signal the activity, and in practice the application used to present an interface to the user.

A group's bytes can be chosen arbitrarily. Membership is enforced by its members. It's up to the user (or some management tool) to pick a method of data exchange.

The following do have a meaning. The [0;32] null group ([#:0]), i.e. the local-only group, is never transmitted to other devices and is never accepted from outside sources. Everything in the [#:pub] group is meant for everybody, i.e. the public.

By convention, pubkey1 XOR pubkey2 forms a group with those two keys as its only members.
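A sketch of this XOR convention (the keys here are made-up placeholder bytes, not real linkspace keys):

```python
def pair_group(pubkey1: bytes, pubkey2: bytes) -> bytes:
    """By-convention group for exactly two members: pubkey1 XOR pubkey2."""
    assert len(pubkey1) == len(pubkey2) == 32
    return bytes(a ^ b for a, b in zip(pubkey1, pubkey2))

alice = bytes(range(32))          # placeholder key bytes
bob   = bytes(range(32, 64))      # placeholder key bytes

# The group is the same regardless of who computes it, ...
assert pair_group(alice, bob) == pair_group(bob, alice)
# ... and either member can recover the other's key from it.
assert pair_group(pair_group(alice, bob), alice) == bob
```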

The [#:...] syntax is part of the LNS, a public registry for assigning names and naming rights,
e.g. [#:sales:mycomp:com] for groups and [@:alicekey:mycomp:com] for keys.

lk_write and lk_read

The point is the content that is hashed; the packet is a mutable network header, the hash, and the point.

datap = lk_datapoint("hello")
linkp = lk_linkpoint()
keyp = lk_keypoint(key)
packet_bytes = lk_write(datap) + lk_write(linkp) + lk_write(keyp)
print(len(packet_bytes),packet_bytes)

# read the bytes as packets
(p1, packet_bytes)= lk_read(packet_bytes)
(p2, packet_bytes)= lk_read(packet_bytes)
(p3, packet_bytes)= lk_read(packet_bytes)

assert(p1 == datap)
assert(p2 == linkp)
assert(p3 == keyp)
432 b"LK1\x00\x00\x00\x00\x00\xff\xff\xff\xff\xff\xff\xff\xff\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x0f\x97b\xe5\x8e\xd1P/\xde\xa8\xc9\x15c\xc4\xce\x00Q\xaf\xf4\x1f\x8f@oc1\xad\x86&3f\x03\x01\x00\x01\x00\thello\xff\xff\xff\xff\xff\xff\xffLK1\x00\x00\x00\x00\x00\xff\xff\xff\xff\xff\xff\xff\xff\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xb94\xfd\xb6U\xe9\x85\x90\x80/\xd4'\nz\x1bD\xab\x95\xf4D[=Dh\xcf/>\xfc\x85\x91\xc7\x89\x00\x03\x00@\x00@\x00@\x00\x06\tWpc\xbfRb\xbb;\x8b=\xd5\xceu\xe1\xfa\x88/\xe1\xa3:\xd9Y\x8c7\x15\xc5\x89>\x0f\xdb\x8bH}\\\xfd\xb19\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00LK1\x00\x00\x00\x00\x00\xff\xff\xff\xff\xff\xff\xff\xff\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x87\xcf\xf2a}\xff\xa2d\xa9\xc9\xc0\t'\xf2\xf9\x05TS+2CM\xe32K\x88^\x14n\xb4\xe5]\x00\x07\x00\xa0\x00@\x00@\x00\x06\tWpc\xbf[b\xbb;\x8b=\xd5\xceu\xe1\xfa\x88/\xe1\xa3:\xd9Y\x8c7\x15\xc5\x89>\x0f\xdb\x8bH}\\\xfd\xb19\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00g\xc7\x8f\xe9>\xa7@\xc1\xd1\x9a\x1dsD8\xc0\x96C\x97{\xcf\xaf\xe1\xda\x9eO\xc5\xf45$W\xb0\x1a\xb7\xf31[\x01\xd7\xa0\xc3\xd1ka\xf1/+\xa1\xb5\x14l\x9d\xf9Jy\xec\xbe<\x82\xccp\x83\x1d\x10\xdb\x99\x92B4\xca\x13\x9a\xab\x8a\xb3\xb0\xf4\xbc\xdcH\xb33\xcc\x9b\x17\xa9\xb6\xe1v\xbbk$\xcf\x8eY\xfes"

The CLI automatically reads and writes in packet format from the relevant pipes.

echo datapoint:
echo -n hello | lk data | tee /tmp/pkts | xxd 
echo linkpoint:
echo -n hello | lk link my_domain:[#:pub]:/hello/world -- link1:[#:0] link2:[#:test] | tee -a /tmp/pkts | xxd 
echo keypoint:
echo -n hello | lk keypoint --password "my secret" my_domain:[#:pub]:/hello/world -- link1:[#:0] link2:[#:test] | tee -a /tmp/pkts | xxd 
cat /tmp/pkts | lk pktf [hash:str]
datapoint:
00000000: 4c4b 3100 0000 0000 ffff ffff ffff ffff  LK1.............
00000010: 0000 0000 0000 0000 0000 0000 0000 0000  ................
00000020: 0f97 62e5 8ed1 502f dea8 c915 63c4 ce00  ..b...P/....c...
00000030: 51af f41f 8f40 6f63 31ad 8626 3366 0301  Q....@oc1..&3f..
00000040: 0001 0009 6865 6c6c 6fff ffff ffff ffff  ....hello.......
linkpoint:
00000000: 4c4b 3100 0000 0000 ffff ffff ffff ffff  LK1.............
00000010: 0000 0000 0000 0000 0000 0000 0000 0000  ................
00000020: dd87 bbd6 bfe3 3fde 9c85 50cc 97f2 aec1  ......?...P.....
00000030: 3c8e efb2 0bcd 6a0e 19e3 0a90 6efe aa58  <.....j.....n..X
00000040: 0003 00b4 00a0 00b4 0006 0957 7064 591e  ...........WpdY.
00000050: 62bb 3b8b 3dd5 ce75 e1fa 882f e1a3 3ad9  b.;.=..u.../..:.
00000060: 598c 3715 c589 3e0f db8b 487d 5cfd b139  Y.7...>...H}\..9
00000070: 0000 0000 0000 006d 795f 646f 6d61 696e  .......my_domain
00000080: 0000 0000 0000 0000 0000 006c 696e 6b31  ...........link1
00000090: 0000 0000 0000 0000 0000 0000 0000 0000  ................
000000a0: 0000 0000 0000 0000 0000 0000 0000 0000  ................
000000b0: 0000 0000 0000 0000 0000 006c 696e 6b32  ...........link2
000000c0: 2d1b f1bc 0f64 246e 3485 65a5 5b0a 9198  -....d$n4.e.[...
000000d0: b02b 33ce dc82 bfb0 56c2 4741 85b4 367c  .+3.....V.GA..6|
000000e0: 0206 0c0c 0c0c 0c0c 0568 656c 6c6f 0577  .........hello.w
000000f0: 6f72 6c64 ffff ffff                      orld....
keypoint:
00000000: 4c4b 3100 0000 0000 ffff ffff ffff ffff  LK1.............
00000010: 0000 0000 0000 0000 0000 0000 0000 0000  ................
00000020: 847e 58af 6b34 bbcf 4f7d 8019 4910 5d1b  .~X.k4..O}..I.].
00000030: 11f3 4c0e 335a 958e 3d27 79aa b934 9111  ..L.3Z..='y..4..
00000040: 0007 0114 00a0 00b4 0006 0957 7064 95b3  ...........Wpd..
00000050: 62bb 3b8b 3dd5 ce75 e1fa 882f e1a3 3ad9  b.;.=..u.../..:.
00000060: 598c 3715 c589 3e0f db8b 487d 5cfd b139  Y.7...>...H}\..9
00000070: 0000 0000 0000 006d 795f 646f 6d61 696e  .......my_domain
00000080: 0000 0000 0000 0000 0000 006c 696e 6b31  ...........link1
00000090: 0000 0000 0000 0000 0000 0000 0000 0000  ................
000000a0: 0000 0000 0000 0000 0000 0000 0000 0000  ................
000000b0: 0000 0000 0000 0000 0000 006c 696e 6b32  ...........link2
000000c0: 2d1b f1bc 0f64 246e 3485 65a5 5b0a 9198  -....d$n4.e.[...
000000d0: b02b 33ce dc82 bfb0 56c2 4741 85b4 367c  .+3.....V.GA..6|
000000e0: 0206 0c0c 0c0c 0c0c 0568 656c 6c6f 0577  .........hello.w
000000f0: 6f72 6c64 ffff ffff 67c7 8fe9 3ea7 40c1  orld....g...>.@.
00000100: d19a 1d73 4438 c096 4397 7bcf afe1 da9e  ...sD8..C.{.....
00000110: 4fc5 f435 2457 b01a c087 1bdd de6c e59e  O..5$W.......l..
00000120: 53e5 8d77 76fa d6b8 f36b f707 d97a fab3  S..wv....k...z..
00000130: 281d eb6f 03eb 639d 5b16 3bae 8483 a4be  (..o..c.[.;.....
00000140: 84cc ec19 0b44 cfa4 d21a 4a3e 6345 ee59  .....D....J>cE.Y
00000150: 75dd 01ba 8ee4 5504                      u.....U.
D5di5Y7RUC_eqMkVY8TOAFGv9B-PQG9jMa2GJjNmAwE
3Ye71r_jP96chVDMl_KuwTyO77ILzWoOGeMKkG7-qlg
hH5Yr2s0u89PfYAZSRBdGxHzTA4zWpWOPSd5qrk0kRE

Linkspace can be used in two general ways: a classic client/server setup where you control the entire network, or a fully distributed mode where each user manages their own runtime. In the latter case, an application does not have to deal with IO sockets directly.

ABE

Rust docs

ABE (Ascii-Byte-Expr) is a tiny language-agnostic byte templating engine. Its core structure is a stringly-typed representation of delimited bytes (of the type [ ([u8], delimiter) ]). Its primary purpose is to make it easy (for developers) to read and write byte sequences (0..=255) in plain ASCII, including the null (0) byte. In addition, it supports evaluating functions that act as shorthand for long byte sequences.

Linkspace packets have no concept of encoding formats. All fields are fixed length or prefix their exact length.

ABE is used for things like queries, printing, and most arguments in the CLI.

ABE is not meant to be a programming language! It is primarily meant to read and write arbitrary bytes in some context and quickly beat them into a desired shape. Some things are limited by design. If there is no obvious way to do something, use a general-purpose language for your use-case.

When building an application you can choose where to use ABE and when to use a different encoding.

Basic Encoding

  • Most printable ASCII characters are encoded as-is.
  • Newline is an external delimiter.
  • : and / are internal delimiters, separating two byte expressions.
  • [ and ] wrap an expression.
  • :, /, \, [, ] can be escaped with a \.
  • \x00 up to \xFF for bytes.
  • \0 equals \x00, \f equals \xFF.
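A rough sketch of these escaping rules in Python (illustrative only; lk encode and lk_encode are the authoritative implementations):

```python
def encode_abtxt(data: bytes) -> str:
    """Escape bytes per the rules above: printable ASCII as-is,
    delimiters backslash-escaped, other bytes as \\xNN (strict mode
    also escapes \\n, \\t, \\r)."""
    out = []
    for b in data:
        c = chr(b)
        if c in ":/\\[]":
            out.append("\\" + c)       # delimiters get a backslash
        elif 0x20 <= b <= 0x7E:
            out.append(c)              # printable ASCII as-is
        elif b == 0x00:
            out.append("\\0")          # \0 equals \x00
        elif b == 0xFF:
            out.append("\\f")          # \f equals \xFF
        elif b == 0x0A:
            out.append("\\n")
        elif b == 0x09:
            out.append("\\t")
        else:
            out.append("\\x%02x" % b)
    return "".join(out)

print(encode_abtxt(b"world/"))          # world\/
print(encode_abtxt(b"open [ close ]"))  # open \[ close \]
```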

We can encode binary into valid abtxt as follows:

We'll come back to encoding in more depth later.

printf "hello" | lk encode -i
printf "world/" | lk encode -i
printf "nl \n" | lk encode -i
printf "open [ close ]" | lk encode -i
printf "emoji ⌨" | lk encode -i

All fields are arbitrary bytes, and lk_encode can print them as abtxt.

multiline = """newline
tab	""".encode() # encode implies utf-8

lkp = lk_linkpoint(spacename=[b"hello",b"world/",multiline,b"open [ close ]"])

print(lk_encode(lkp.comp0),"\t",list(lkp.comp0))
print(lk_encode(lkp.comp1),"\t",list(lkp.comp1))
print(lk_encode(lkp.comp2),"\t",list(lkp.comp2))
print(lk_encode(lkp.comp3),"\t",list(lkp.comp3))
print(lk_encode(lkp.comp4),"\t",list(lkp.comp4), lkp.comp4.decode("utf-8"))
hello 	 [104, 101, 108, 108, 111]
world\/ 	 [119, 111, 114, 108, 100, 47]
newline\ntab\t 	 [110, 101, 119, 108, 105, 110, 101, 10, 116, 97, 98, 9]
open \[ close \] 	 [111, 112, 101, 110, 32, 91, 32, 99, 108, 111, 115, 101, 32, 93]
 	 []

lk_eval

ABE is evaluated by substituting an expression ([..]) with its result. For example, in [u8:97] the function 'u8' is called with the arguments ["97"]. The function 'u8' reads the decimal string and writes it as a byte. The byte 97 equals the character 'a'; the byte 99 equals the character 'c'.

lk eval "ab[u8:99]" | xxd
00000000: 6162 63                                  abc
lk eval --json "h[u8:101]ll[u8:111] / world:etc" 
[[null,"hello "],["/"," world"],[":","etc"]]

Note that bytes are joined after evaluation. In the example this results in h + e + ll + o + ' ' => 'hello '. The meaning of the delimiters ('\n', ':', '/') is interpreted depending on the context. For instance, lk eval prints them as-is for the outer expression.

The rest of this chapter explains ABE further in depth.

The ABE functions can shorten your code, and almost every CLI argument is an ABE expression.
These expressions can directly refer to the context, such as a packet.
This makes ABE a powerful tool.

But knowing all of ABE's features is not required to use linkspace.

ABE is a language of convenience.

With basic knowledge of its purpose to read and write separated bytes (i.e. [ [u8] ]), expression substitution ([..]), and the ':', '/' delimiters,
the rest of the guide (starting at Query) can be read while you return here for reference in case something is unclear.

There are two modes for tokenizing (before evaluation).

  • Strict: \n, \t, \r and bytes outside the range 0x20(SPACE)..=0x7e(~) are escaped.
  • Parse Unencoded: bytes outside 0x20..=0x7e are read 'as-is'.

Both error when bytes are incorrectly escaped or unclosed [ ] brackets exist.

Sub-expressions

A list of functions/macros can be found by evaluating [help].

Functions
  • [fn]
  • [fn:arg0]
  • [fn:arg0:arg1]

The arguments are plain bytes. A function can take up to 8 arguments. Usually the result is concatenated with its surrounding bytes. The empty function '[:...]' resolves to its first argument.

  • hello [:world] == hello world

Arguments are evaluated before application. [fn0:[fn1]] will call fn1 and use its result as the first argument to fn0.

You can chain results with /. It uses the result as the first argument to the next function.

  • [:97/u8] = [u8:97] = a
  • [:97/u8/?u] = [?u:[u8:97]] = 97
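In conventional notation, the '/' chain is just nested calls read left to right; a Python sketch, assuming u8 parses a decimal string into one byte and ?u prints bytes as decimal (as this chapter describes):

```python
def u8(s: str) -> bytes:
    """[u8:..] - parse a decimal string into a single byte."""
    return int(s).to_bytes(1, "big")

def qu(b: bytes) -> str:
    """[?u] - print big-endian bytes as decimal."""
    return str(int.from_bytes(b, "big"))

# [:97/u8]     => u8("97")      => b"a"
print(u8("97"))        # b'a'
# [:97/u8/?u]  => qu(u8("97"))  => "97"
print(qu(u8("97")))    # 97
```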

You can think of ABE functions as a translation of conventional function calling.

[name:arg1:arg2]               name(arg1, arg2)
[name:[other_name:argA]:arg2]  name(other_name(argA), arg2)
[other_name:argA/name:arg2]    name(other_name(argA), arg2)

Functions are aware of whether they are in the first position or not.
The vast majority of functions do not care.

[[:u8]:97] is explicitly not allowed. Variable function identifiers are conceptually interesting but practically begging for bugs.

Note: describing ABE can be a bit tricky in relation to conventional languages. Specifically, there is no syntax to "reference" a function, they are always resolved to their result. i.e. fn name(){..}; let x = name; let y = name(); has the () syntax to differentiate between calling a function or referencing a value. There are no 'variables' by design, because ABE is not meant to be used that way.

Macros

The second type of operation is applying a macro. Whereas functions are called after their arguments are evaluated, macros receive everything up to the matching ']' as-is, without evaluating [..] expressions.

  • [/a_macro]
  • [/a_macro:arg0:arg1]
  • [/a_macro:[fn:arg0]:arg1/hello]

The /a_macro macro operates on :[fn:arg0]:arg1/hello without it being evaluated.

Scope & Context

Functions and Macros are defined in a scope. Scopes can be chained, so that if no matching function is found it looks in the next scope. The standard scope chain has multiple functions and macros to manipulate bytes. You can see all active scopes with the [help] function.

Sometimes the scope chain is extended with additional context:

Argv

A scope containing functions resolving to an argument vector.

inp = "Rm9ycmVzdA" # the base 64 encoding of the word "Forrest"
lk_eval("[0] [1/b], [0]!",argv=["Run",inp])
b'Run Forrest, Run!'
Packet

By providing a packet, the packet scope is added to the chain. This adds functions such as hash, group, spacename etc. These are bytes that you can use as arguments.

e.g. [hash/?b] encodes the hash in base64.

For convenience all packet fields accept 'str' and 'abe' as a first argument to print them in a default format.

[hash:str] [hash/?b]
[group:str] [group/?b]
[create:str] [create/?u]
[links_len:str] [links_len/?u]
 

The [/links:...] macro iterates over every link in a packet. It evaluates the inner expression for each link, setting the tag and ptr functions.

lk pktf is lk eval, except that it reads packets from stdin and puts them in scope.

lk link "::" -- tag1:[#:0] tag2:[#:pub] | \
    lk pktf "HASH:[hash/?b]\n[/links:TAG = [tag:str] PTR = [ptr:str] \n]"
HASH:SPnYSu0qyHk1GQ3LA6oM5VEGxIsmd4CkIq4eS4YFkG4
TAG = tag1 PTR = AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA 
TAG = tag2 PTR = Yrs7iz3VznXh-ogv4aM62VmMNxXFiT4P24tIfVz9sTk
lp = lk_linkpoint(links=[Link("hello",PUBLIC),Link("world",PRIVATE)])
lk_eval2str("hash:[hash:str]\\n[/links:[tag:str] [ptr:str]\\n]",pkt=lp)
hash:j3lXxBtm4-ukATH4uybo5yAKuYuUuCpHOlPigVljFnM
hello Yrs7iz3VznXh-ogv4aM62VmMNxXFiT4P24tIfVz9sTk
world AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
Runtime

Having a linkspace runtime in the scope gives you access to functions like:

  • # and @ ( see LNS ) for named groups, keys, and other data
  • readhash

The first instance of lk_open is used as the default (thread-local) runtime.

readhash is considered bad practice: it is fine for hacking something together, but it doesn't give you much room to handle errors or async behavior. You can, however, do some wizardry by combining it with [/links].

Usage notes

ABE expressions evaluate into a list of [ (?sep,bytes) ]. Sometimes each element has a different meaning, e.g. [ ( 0, domain ) , ( :, group) ] in the CLI arguments. You can process this list with lk_tokenize_abe.

But in the majority of cases we don't care about the list and only want a single result. lk_eval does just that: it interprets the separators as plain characters.
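Conceptually, that flattening step just treats each separator as a literal character; a sketch, using the [(separator, bytes)] shape from the earlier --json example:

```python
def flatten(tokens):
    """Join a [(separator, bytes)] token list into one byte string,
    treating each separator as a plain character (as lk_eval does)."""
    out = b""
    for sep, chunk in tokens:
        if sep is not None:
            out += sep.encode()
        out += chunk
    return out

# Mirrors: lk eval --json "h[u8:101]ll[u8:111] / world:etc"
tokens = [(None, b"hello "), ("/", b" world"), (":", b"etc")]
print(flatten(tokens))   # b'hello / world:etc'
```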

Finally, consider what you would expect to happen when a macro takes an ABE expression as its final argument:

  • [/links:abc[:hello]/world]
  • [/readhash:[#:pub]:the pkt:[pkt]]
  • [/:hello/world]

The choice was made that if the final argument is an ABE expression that will be evaluated, it doesn't need wrapping []. Instead, the entire tail is interpreted as-is. This reduces the need to escape ':' and '/', but complicates some other expressions.

We can add an expression to --write arguments:

lk link :: --write "stdout-expr:hello world:/ [hash:str]"

In the case of a file, this leaves us in the situation that the second argument is the file, and the tail of the expression will be evaluated.
One option is to use [/:..] to read ':' and '/' as-is:

lk link :: --write "file-expr:[/:./afolder:with/colons]:hello world:/ [hash:str]"

Help

A full list of active scopes can be viewed with the help function.

The following naming conventions are used:

- A name ending with '?' is a predicate to check a property.
- A name starting with '?' is a basic reverse operation, e.g. [u8:97/?u] == 97. It is similar to, but less powerful than, lk_encode and lacks '[]' brackets.
- b_RADIX_ (b2, b8, b16); plain 'b' defaults to base64.
- u_SIZE_ (u8, .., u128) parses decimal into big-endian bytes; ?u interprets big-endian bytes and prints them as decimal.

lk_eval2str("[help]",pkt=lk_linkpoint(),argv=["hello"]) # the help won't show up if no scope is set. 
Each scope has functions and macros
For each function the option set ['[' , '/' , '?'] is given
These refer to its use as:
 '['  => Can be used to open   '[func/..]'
 ':'  => Can be used in sequence '[../func]' (taking the left side as first argument)
 '?'  => Can be encoded (i.e. 'reversed') to some extent '[../?:func]' || [?:..:func]

# netpkt field
get a field of a netpkt. also used in watch predicates.
## Functions
- netflags         [            0..=1     ?(str|abe) - netpkt.netflags  
- hop              [            0..=1     ?(str|abe) - netpkt.hop  
- stamp            [            0..=1     ?(str|abe) - netpkt.stamp  
- ubits0           [            0..=1     ?(str|abe) - netpkt.ubits0  
- ubits1           [            0..=1     ?(str|abe) - netpkt.ubits1  
- ubits2           [            0..=1     ?(str|abe) - netpkt.ubits2  
- ubits3           [            0..=1     ?(str|abe) - netpkt.ubits3  
- hash             [            0..=1     ?(str|abe) - netpkt.hash  
- type             [            0..=1     ?(str|abe) - netpkt.type  
- size             [            0..=1     ?(str|abe) - netpkt.size  
- pubkey           [            0..=1     ?(str|abe) - netpkt.pubkey  
- signature        [            0..=1     ?(str|abe) - netpkt.signature  
- group            [            0..=1     ?(str|abe) - netpkt.group  
- domain           [            0..=1     ?(str|abe) - netpkt.domain  
- create           [            0..=1     ?(str|abe) - netpkt.create  
- depth            [            0..=1     ?(str|abe) - netpkt.depth  
- links_len        [            0..=1     ?(str|abe) - netpkt.links_len  
- data_size        [            0..=1     ?(str|abe) - netpkt.data_size  
- spacename        [            0..=1     ?(str|abe) - netpkt.spacename  
- rspacename       [            0..=1     ?(str|abe) - netpkt.rspacename  
- comp0            [            0..=1     ?(str|abe) - netpkt.comp0  
- comp1            [            0..=1     ?(str|abe) - netpkt.comp1  
- comp2            [            0..=1     ?(str|abe) - netpkt.comp2  
- comp3            [            0..=1     ?(str|abe) - netpkt.comp3  
- comp4            [            0..=1     ?(str|abe) - netpkt.comp4  
- comp5            [            0..=1     ?(str|abe) - netpkt.comp5  
- comp6            [            0..=1     ?(str|abe) - netpkt.comp6  
- comp7            [            0..=1     ?(str|abe) - netpkt.comp7  
- data             [            0..=1     ?(str|abe) - netpkt.data  

# print pkt default

## Functions
- pkt              [            0..=0     default pk fmt  
- netpkt           [            0..=0     TODO default netpkt fmt  
- point            [            0..=0     TODO default point fmt  
- pkt-quick        [            0..=2     [add recv? =false , data_limit = max] same as pkt but without dynamic lookup  
- html-quick       [            0..=0     same as html but without dynamic lookup  
- netbytes         [            0..=0     raw netpkt bytes  

# select link

## Functions
- links            [            0..=4     [delim='\n',start=0,stop=len,step=1] - python like slice indexing  
- link             [            1..=1     [suffix] get first link with tag ending in suffix  
## Macros
- links            :{EXPR} where expr is repeated for each link binding 'ptr' and 'tag'  

# recv
recv stamp for packet. value depends on the context
## Functions
- recv             [            0..=1     recv stamp - returns now if unavailable in context  
- recv_now         [            0..=1     recv stamp - returns an error if not available in context  

# bytes
Byte padding/trimming
## Functions
- ?a               [/           1..=1     encode bytes into ascii-bytes format  
- ?a0              [/           1..=1     encode bytes into ascii-bytes format but strip prefix '0' bytes  
- a                [/?          1..=3     [bytes,length = 16,pad_byte = \0] - alias for 'lpad'  
- f                [/           1..=3     same as 'a' but uses \xff as padding   
- lpad             [/           1..=3     [bytes,length = 16,pad_byte = \0] - left pad input bytes  
- rpad             [/           1..=3     [bytes,length = 16,pad_byte = \0] - right pad input bytes  
- ~lpad            [/           1..=3     [bytes,length = 16,pad_byte = \0] - left pad input bytes  
- ~rpad            [/           1..=3     [bytes,length = 16,pad_byte = \0] - right pad input bytes  
- lcut             [/           1..=2     [bytes,length = 16] - left cut input bytes  
- rcut             [/           1..=2     [bytes,length = 16] - right cut input bytes  
- ~lcut            [/           1..=2     [bytes,length = 16] - lcut without error  
- ~rcut            [/           1..=2     [bytes,length = 16] - lcut without error  
- lfixed           [/           1..=3     [bytes,length = 16,pad_byte = \0] - left pad and cut input bytes  
- rfixed           [/           1..=3     [bytes,length = 16,pad_byte = \0] - right pad and cut input bytes  
- replace          [/           3..=3     [bytes,from,to] - replace pattern from to to  
- slice            [/           1..=4     [bytes,start=0,stop=len,step=1] - python like slice indexing  
- ~utf8            [/           1..=1     lossy encode as utf8  

# UInt
Unsigned integer functions
## Functions
- +                [/           1..=16     Saturating addition. Requires all inputs to be equal size  
- -                [/           1..=16     Saturating subtraction. Requires all inputs to be equal size  
- u8               [/?          1..=1     parse 1 byte  
- u16              [/?          1..=1     parse 2 byte  
- u32              [/?          1..=1     parse 4 byte  
- u64              [/?          1..=1     parse 8 byte  
- u128             [/?          1..=1     parse 16 byte  
- ?u               [/           1..=1     Print big endian bytes as decimal  
- lu               [/           1..=1     parse little endian byte (upto 16)  
- lu8              [/           1..=1     parse 1 little endian byte  
- lu16             [/           1..=1     parse 2 little endian byte  
- lu32             [/           1..=1     parse 4 little endian byte  
- lu64             [/           1..=1     parse 8 little endian byte  
- lu128            [/           1..=1     parse 16 little endian byte  
- ?lu              [/           1..=1     print little endian number  

# base-n
base{2,8,16,32,64} encoding - (b64 is url-safe no-padding)
## Functions
- b2               [/?          1..=1     decode binary  
- b8               [/           1..=1     encode octets  
- b16              [/?          1..=1     decode hex  
- ?b               [/           1..=1     encode base64  
- 2mini            [/           1..=1     encode mini  
- b                [/?          1..=1     decode base64  
- ?bs              [/           1..=1     encode base64 standard padded  
- bs               [/?          1..=1     decode base64 standard padded  
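The [b:..]/[?b] pair corresponds to url-safe base64 without padding; a plain-Python sketch of that convention (the helper names are mine, not the library's):

```python
import base64

# Plain-Python model of [b:..] / [?b]: url-safe base64, no padding.

def b_encode(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def b_decode(text: str) -> bytes:
    pad = "=" * (-len(text) % 4)   # restore padding before decoding
    return base64.urlsafe_b64decode(text + pad)

digest = bytes(range(32))          # e.g. a 32-byte hash or group id
assert b_decode(b_encode(digest)) == digest
# url-safe alphabet: no '+' or '/' characters appear
assert "+" not in b_encode(bytes([251] * 6))
assert "/" not in b_encode(bytes([251] * 6))
```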

# comment function / void function. evaluates to nothing

## Functions
- C                [/           1..=16     the comment function. all arguments are ignored. evaluates to ''  

# help

## Functions
- help             [/           0..=16     help  
## Macros
- help             describe the current eval context  

# logic ops
ops are : < > = 0 1 
## Functions
- size?            [/           3..=3     [in,OP,VAL] error unless size passes the test ( UNIMPLEMENTED )  
- val?             [/           3..=3     [in,OP,VAL] error unless value passes the test ( UNIMPLEMENTED )  
## Macros
- or               :{EXPR}[:{EXPR}]* short circuit evaluate until valid return. Empty is valid, use {_/minsize?} to error on empty  

# encode
attempt an inverse of a set of functions
## Functions
- eval             [/           1..=1     parse and evaluate  
- ?                [/           2..=8     encode  
- ??               [/           2..=8     encode - strip out '[' ']'  
- ???              [/           2..=8     encode - strip out '[func:' + ']'  
## Macros
- ?                find an abe encoding for the value trying multiple reversal functions - [/fn:{opts}]*   
- ~?               same as '?' but ignores all errors  
- e                eval inner expression list. Useful to avoid escapes: e.g. file:{/e:/some/dir:thing}:opts does not require escaping the '/'   

# static-lns
static lns for local only [#:0] and public [#:pub]
## Functions
- #                [ ?          1..=16     resolve #:0 , #:pub, and #:test without a db  
- @                [ ?          1..=16     resolve @:none  

# microseconds
utilities for microseconds values (big endian u64 microsecond since unix epoch)
arguments consist of ( [+-][YMWDhmslu]usize : )* (str | delta | ticks | val)?

## Functions
- us               [/           0..=16     if chained, mutate 8 bytes input as stamp (see scope help). if used as head assume stamp 0  
- now              [            0..=16     current systemtime  
- epoch            [            0..=16     unix epoch / zero time  
- us++             [            0..=16     max stamp  
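A plain-Python sketch of the stamp representation described above (a big-endian u64 counting microseconds since the unix epoch); the helper names are mine, not the library's:

```python
import time

# Plain-Python model of linkspace stamps: big-endian u64 microseconds
# since the unix epoch (illustration, not the linkspace API).

def now_stamp() -> bytes:
    """Model of [now]."""
    return int(time.time() * 1_000_000).to_bytes(8, "big")

EPOCH = (0).to_bytes(8, "big")               # model of [epoch]
MAX_STAMP = (2**64 - 1).to_bytes(8, "big")   # model of [us++]

def minus_days(stamp: bytes, days: int) -> bytes:
    """Model of a [..:-{days}D] delta argument."""
    us = int.from_bytes(stamp, "big") - days * 24 * 60 * 60 * 1_000_000
    return us.to_bytes(8, "big")

# Fixed-width big-endian stamps compare byte-wise in time order.
assert EPOCH < minus_days(now_stamp(), 1) < now_stamp() < MAX_STAMP
```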

# space
space utils. Usually [//some/space] is the most readable
## Functions
- ?space           [/           1..=1     decode space  
- si               [/           2..=3     space idx [start,?end]  
- s                [/?          1..=8     build space from arguments - alternative to [//some/path] syntax  
## Macros
-                  the 'empty' eval for encoding space. i.e. [//some/space/val] creates the byte for /some/space/val  
- ~                similar to '//' but forgiving on empty components  

# lns

## Functions
- #                [ ? <partial> 1..=7     (namecomp)* - get the associated lns group  
- ?#               [/           1..=1     find by group# tag  
- @                [ ? <partial> 1..=7     (namecomp)* - get the associated lns key  
- ?@               [/           1..=1     find by pubkey@ tag  
## Macros
- lns              [:comp]*/expr  

# private-lns
Only look at the private claims lookup tree. Makes no requests
## Functions
- private#         [ ?          1..=7     (namecomp)* - get the associated lns group  
- ?private#        [/           1..=1     find by group# tag  
- private@         [ ?          1..=7     (namecomp)* - get the associated lns key  
- ?private@        [/           1..=1     find by pubkey@ tag  
## Macros
- private-lns      [:comp]*/expr  

# filesystem env
Reading files from "/tmp/linkspace/files"
## Functions
- files            [/           1..=1     read a file from the LK_DIR/files directory  

# database
get packets from the local db.
e-funcs evaluate their args as if in pkt scope.
funcs evaluate as if [/[func + args]:[rest]]. (e.g. [/readhash:HASH:[group:str]] == [readhash:..:group:str])
## Functions
- readhash         [/           1..=16     open a pkt by hash and use tail args as if calling in a netpkt scope  
- read             [/           2..=16     open a pkt by domain, group, space, and key and apply the tail args. e.g. [read:mydomain:[#:pub]:[//a/space]:[@:me]::data:str] - does not use default group/domain - prefer eval scope  
## Macros
- readhash         HASH ':' expr (':' alt if not found)   

# Unset<abe::eval::EScope<linkspace_common::eval::OSEnv>>


# user input list
Provide values, access with [0] [1] .. [7] 
## Functions
- 0                [            0..=0     argv[0]  
- 1                [            0..=0     argv[1]  
- 2                [            0..=0     argv[2]  
- 3                [            0..=0     argv[3]  
- 4                [            0..=0     argv[4]  
- 5                [            0..=0     argv[5]  
- 6                [            0..=0     argv[6]  
- 7                [            0..=0     argv[7]  

lk_encode

Translate bytes into abe such that lk_eval(lk_encode(X)) == X

We can get meta: lk_encode itself is available inside abe as the macro [/?:bytes:options]

data = bytes([0,0,0,255])
abe = lk_encode(data)
assert data == lk_eval(abe)
print("ab  text:", abe)
abe = lk_encode(data,"u8/u32/b") # Try to encode as expression
print("abe text:", abe)
ab  text: \0\0\0\f
abe text: [u32:255]

DEFAULT_FMT

This is how packets are printed by default using lk pktf or Python's str(pkt).

import linkspace
print(linkspace.DEFAULT_PKT)
type\t[type:str]\nhash\t[hash:str]\ngroup\t[/~?:[group]/#/b]\ndomain\t[domain:str]\nspace\t[spacename:str]\npubkey\t[/~?:[pubkey]/@/b]\ncreate\t[create:str]\nlinks\t[links_len:str]\n[/links:\t[tag:str] [ptr:str]\n]\ndata\t[data_size:str]\n[data/~utf8]\n

lk_tokenize_abe

LNS

LNS is a system for publicly naming keys and groups, and adding auxiliary data to them. It allows you to register as @:Alice:nl, #:sales:company:com, etc.

LNS is easy to use from an abe expression, both for lookup and reverse lookup.

You can create local bindings, allowing you to reference [@:my_identity:local] or [#:friends:local]
By default lk_key sets up the [@:me:local] identity.

lk eval "[#:pub]" | lk encode "@/#/b"
lk eval "[@:me:local]" | lk encode "@/#/b"
group = example_keypoint.group
print("The bare bytes:", group)

# encode as b64
b64 = lk_encode(group,"b")
print("b64 encoded   :", b64)

# Try to express as a [#:..], on failure try as [@:..], fallback to [b:...]
try_name = lk_encode(group,"#/@/b")
print("Or through lns:", try_name)

print("Pkt's pubkey  :",example_keypoint.pubkey)
try_keyname = lk_encode(example_keypoint.pubkey,"#/@/b")
print("Similarly lns :", try_keyname)


The bare bytes: b'b\xbb;\x8b=\xd5\xceu\xe1\xfa\x88/\xe1\xa3:\xd9Y\x8c7\x15\xc5\x89>\x0f\xdb\x8bH}\\\xfd\xb19'
b64 encoded   : [b:Yrs7iz3VznXh-ogv4aM62VmMNxXFiT4P24tIfVz9sTk]
Or through lns: [#:pub]
Pkt's pubkey  : b'g\xc7\x8f\xe9>\xa7@\xc1\xd1\x9a\x1dsD8\xc0\x96C\x97{\xcf\xaf\xe1\xda\x9eO\xc5\xf45$W\xb0\x1a'
Similarly lns : [@:me:local]

Query

Rust docs

A query is a list of predicates and options used to define a set of packets. They're used in various ways, most notably you can use them to read (lk_get, lk_get_all), await (lk_watch) and request (lk_pull) packets.

lk_query

Queries are newline separated. A predicate is an ABE 3-tuple of the form field ':' test-op ':' value and constrains the set of accepted packets. Options are context dependent and start with ':'

A query might look like this:

group:=:[#:pub]
domain:=:example
spacename:=:/hello/world
pubkey:=:[@:me:local]
create:>:[now:-1D]

A predicate can be set multiple times. In the example above we could add create:<:[now:+2D] to constrain it further. Queries are designed such that you can concatenate their strings and get their union. If the result is the empty set an error is returned.
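A rough sketch of this structure in plain Python (not the library's actual parser): statements split on newlines, options start with ':', predicates are 3-tuples, and concatenating two query strings simply unions their statements.

```python
# Toy parser for the query text format (illustration, not the real parser).

def split_query(text: str):
    options, predicates = [], []
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        if line.startswith(":"):
            options.append(line)                  # e.g. ':mode:tree-desc'
        else:
            field, op, value = line.split(":", 2) # field ':' test-op ':' value
            predicates.append((field, op, value))
    return options, predicates

q1 = "group:=:[#:pub]\ndomain:=:example\n"
q2 = "create:>:[now:-1D]\n"
# Concatenating two query strings yields the union of their statements.
opts, preds = split_query(q1 + q2)
assert preds == [("group", "=", "[#:pub]"),
                 ("domain", "=", "example"),
                 ("create", ">", "[now:-1D]")]
```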

There are 4 basic test operations and a couple of aliases.

| Basic Op | Test |
|----------|------|
| > | strictly greater |
| < | strictly less |
| 0 | all '0' bits in value are '0' in field |
| 1 | all '1' bits in value are '1' in field |

The following are shorthand and resolve to one or more of the basic tests.

| Derived Op | Resolves to |
|------------|-------------|
| =  | >(val-1) and <(val+1) |
| >= | >(val-1) |
| <= | <(val+1) |
| *= | the last n bytes must equal val |
| =* | the first n bytes must equal val |

The CLI can act as a guide when creating queries; see lk print-query --help.

Many CLI commands (e.g. print-query, watch) take as their first argument a domain:group:spacename:(?depth).
If no depth is set, the depth is constrained by default, except for watch-tree which defaults to unconstrained.

Here we look for the domain 'my', the group [#:pub], with spacenames starting with /hello plus one additional component.

lk print-query "my:[#:pub]:/hello:*" --signed
:mode:tree-desc

type:1:[b2:00000111]
domain:=:[a:my]
group:=:[b:Yrs7iz3VznXh-ogv4aM62VmMNxXFiT4P24tIfVz9sTk]
prefix:=:/hello
depth:<:[u8:3]
depth:>:[u8:0]
template = lk_query_parse(lk_query(),"group:=:[#:pub]")
a_copy = lk_query(template)
lk_query_print(a_copy)

type:1:\x02
group:=:b\xbb;\x8b=\xd5\xceu\xe1\xfa\x88\/\xe1\xa3\:\xd9Y\x8c7\x15\xc5\x89>\x0f\xdb\x8bH}\\\xfd\xb19

lk_query_parse

Add multiple constraints to a query. Statements can be given as one multi-line string or one per line. Each line is evaluated as an abe expression. You can set a pkt or argv context.

Returns an error if the resulting set is empty. The full list of predicates and their byte size can be found here.

q = lk_query()

stmt = """
group:=:[#:pub]
domain:=:example
"""

q = lk_query_parse(q,stmt,
               "depth:<:[u8:4]",
               "data_size:<:[0]",argv=[int(10).to_bytes(2)]) 
lk_query_print(q,True)

type:1:[b2:00000011]
domain:=:[a:example]
group:=:[b:Yrs7iz3VznXh-ogv4aM62VmMNxXFiT4P24tIfVz9sTk]
data_size:<:[u16:10]
depth:<:[u8:4]

lk_query_push

Similar to lk_query_parse, but it only adds a single statement and the last field takes the raw bytes.

q = lk_query()
q = lk_query_push(q,"data_size","<",bytes([0,4])) # less than 4
q = lk_query_push(q,"data_size","<",lk_eval("[u16:20]"))  # less than 20
q = lk_query_push(q,"data_size","<",int(3).to_bytes(2))  # less than 3
lk_query_print(q)

type:1:\x01
data_size:<:\0\x03

Adding a contradiction returns an error.

try:
  r = lk_query_push(q,"data_size",">",bytes([0,100])) # greater than 100 and smaller than 3 can not both be true
except Exception as e :
  r = ("That's not possible",e)
r

("That's not possible", RuntimeError("Error adding rule 'data_size'\n\nCaused by:\n    0: data_size:>:[u16:100]\n    1: incompatible Greater 100"))

lk_query_print

Print a query as text. Overlapping predicates will have been merged. The boolean argument sets whether to print abe expressions or stick to a representation without expressions.

lk_query_print(q,True)

type:1:[b2:00000001]
data_size:<:[u16:3]

The b2 function reads a binary representation.
The types are: datapoint=[b2:0000_0001], linkpoint=[b2:0000_0011], keypoint=[b2:0000_0111].
Setting a 'group', 'domain', 'spacename', 'links', or 'create' predicate automatically excludes the datapoint type.
Setting 'pubkey' or 'signature' excludes datapoints and linkpoints.
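A plain-Python illustration of how the type bits and the '1' test interact (the constants follow the type values listed above; this is a model, not the real API):

```python
# How type predicates exclude kinds of points.
# '1' test: every '1' bit in the predicate value must be set in the field.

DATAPOINT, LINKPOINT, KEYPOINT = 0b001, 0b011, 0b111

def type_passes(point_type: int, required_bits: int) -> bool:
    return required_bits & ~point_type == 0

# e.g. a group predicate implies type:1:[b2:00000010], excluding datapoints:
assert not type_passes(DATAPOINT, 0b010)
assert type_passes(LINKPOINT, 0b010) and type_passes(KEYPOINT, 0b010)
# a pubkey predicate implies type:1:[b2:00000111], leaving only keypoints:
assert [t for t in (DATAPOINT, LINKPOINT, KEYPOINT)
        if type_passes(t, 0b111)] == [KEYPOINT]
```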

More on predicates

group requires 32 bytes but will try to parse base64.
domain requires 16 bytes but will prepend '\0' if too few bytes are given.
spacename and prefix only take the '=' op. Their value is the spacename bytes, e.g. spacename:=:[//hello/world], but /hello/world is accepted as well.

Besides the fields in a point, predicates also apply to the hash and variable net header fields.

The netheader fields can be mutated, and are stored in the database when a packet is first written. Domain applications should avoid these fields. They are used when writing an exchange process. (For more notes on that see dev/exchange.md)

The netheader is 32 bytes, with the following named fields:

| Field | Size | Meaning |
|-------|------|---------|
| Prefix | 3 | magic bytes 'LK1' |
| NetFlags | 1 | see source code |
| hop | 2 | number of hops since creation |
| stamp | 8 | |
| ubits0 | 4 | |
| ubits1 | 4 | |
| ubits2 | 4 | |
| ubits3 | 4 | RESERVED |

All of these can be mutated, except for the prefix and ubits3.

Recv

The final predicate is 'recv'. This is an 8 byte stamp for when the packet was first read. It can be used to filter, e.g. recv:>:[now:-1D].

It is considered bad design for applications to depend on this. They should use the create stamp to avoid depending on any group-exchange specific behavior.

The recv predicate depends on the context. Reading from the database the recv is set to the time the packet was received: lk watch-log --bare -- "recv:>:[now:-1D]" But when reading from a pipe it is set to when the pipe reads the packet: lk watch-log | lk filter "recv:>:[now:-1D]"

In both cases the predicate recv:<:[now:+1m] would stop the process after 1 minute.

Options

Options are additional configurations for a set of predicates.

They are of the form ':' name ':' rest. Depending on the context/function, some options are ignored. Developers are free to expand their meaning for their use case, for example in a group exchange process.

Unlike predicates, options are not associative. The order they are joined matters. When using lk_query_push or lk_query_parse, the options are added to the front of the string. i.e. finding the first matching option is done by reading from top to bottom. Extensions should clearly define the meaning when multiple appear.

NOTE: These options will change somewhat in coming versions.

| Name | Value | Use | Multiple |
|------|-------|-----|----------|
| :mode: | (tree/hash/log)-(asc/desc) | set the index to read from | topmost is used |
| :follow | | also output linked packets | N/A |
| :qid: | <any> | identify/close an active query | topmost is used |
| :notify-close | | send a final dummy packet when closing a query | N/A |

Known predicates & options

The full list of predicates and options:

Unknown options are added but ignored by most library functions. This allows other processes to add additional options that they understand.

lk print-query --help
hash         - the point hash e.g. \[b:AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA\]
group        - group id e.g. \[#:pub\]
domain       - domain - if fewer than 16 bytes, prepadded with \0 e.g. \[a:example\]
prefix       - all points with spacename starting with prefix - only accepts '=' op e.g. /hello/world
spacename    - exact spacename - only accepts '=' op e.g. /hello/world
pubkey       - public key used to sign point e.g. \[@:me:local\]
create       - the create stamp e.g. \[now:-1H\]
depth        - the total number of space components - max 8 e.g. \[u8:0\]
links_len    - the number of links in a packet e.g. \[u16:0\]
data_size    - the byte size of the data field e.g. \[u16:0\]
recv         - the recv time of a packet e.g. \[now:+1D\]
i_branch     - total packets per uniq (group,domain,space,key) - only applicable during local tree index, ignored otherwise e.g. \[u32:0\]
i_db         - total packets read from local instance e.g. \[u32:0\]
i_new        - total newly received packets e.g. \[u32:0\]
i            - total matched packets e.g. \[u32:0\]
hop          - (mutable) number of hops e.g. \[u16:5\]
stamp        - (mutable) variable stamp e.g. \[now\]
ubits0       - (mutable) user defined bits e.g. \[u32:0\]
ubits1       - (mutable) user defined bits e.g. \[u32:0\]
ubits2       - (mutable) user defined bits e.g. \[u32:0\]
ubits3       - (mutable) user defined bits e.g. \[u32:0\]
type         - the field type bits - implied by other predicates e.g. \[b2:00000001\]
netflags     - (mutable) netflags e.g. \[b2:00000000\]
size         - exact size of the netpkt when using lk_write or lk_read - includes netheader and hash  e.g. \[u16:4\]

The following options are available

	:mode
	:qid
	:follow
	:notify-close


query - print full query from common aliases

Usage: lk print-query [OPTIONS] [DGPD] [-- <EXPRS>...]

Arguments:
  [DGPD]      
  [EXPRS]...  

Options:
  -p, --print-expr               print the query
      --print-text               print in ascii-byte-text format (ABE without '[..]' expressions)
      --mode <MODE>              [default: tree-desc]
      --db-only                  only match locally indexed pkts           | `i_new:=:[u32:0]`
      --new-only                 only match new unindexed pkts             | `i_db:=:[u32:0]`
      --max <MAX>                match upto max packets.                   | `i:<:[u32:max]`
      --max-branch <MAX_BRANCH>  match upto max per (dm,grp,space,key) key | `i_branch:<:[u32:max_branch]`
      --max-index <MAX_INDEX>    match upto max from local index           | `i_db:<:[u32:max_index]`
      --max-new <MAX_NEW>        match upto max unindexed pkts             | `i_new:<:[u32:max_new]`
      --signed                   match only signed pkts                    | `pubkey:>:[@:none]`
      --unsigned                 match only unsigned pkts                  | `pubkey:=:[@:none]`
      --watch                    Add :qid option (generates qid)
      --qid <QID>                set :qid option (implies --watch)
      --follow                   Add :follow option
      --until <UNTIL>            add `recv:<:[us:INIT:+{until}]` where INIT is set at start
  -b, --bare                     do not read any domain:group:space argument - WARNING - this might include all datapoints depending on mode and filters
  -h, --help                     Print help

General Pkt IO Options:
      --private  enable io of linkpoints in [#:0] [env: LK_PRIVATE=]

lk_hash_query

[This function might be removed]

A shorthand for getting a packet by hash. Can be used with lk_watch. If you expect the value to be known locally, lk_get_hash is faster.

Runtime

Rust docs

You can open/create an instance with lk_open. If given no directory it opens $LK_DIR or $HOME/linkspace.

An instance is a handle to a multi-reader, single-writer database. The handle is thread local; each thread or process must call lk_open itself.

An application can save to the database (lk_save). To read from the database you can directly get a packet by hash (lk_get_hash) or using a query (lk_get, lk_get_all).

A thread can register a watch (lk_watch). A watch is a query plus a function that is called for each new packet matching the query. By default the function is also immediately called for each existing match in the database.

Whenever any thread saves a new packet all other threads receive a signal.

By default nothing happens with that signal. A thread does not see the new packets until its read transaction is updated. This includes calls to lk_get* and lk_watch.

lk_process and lk_process_while update the transaction and process all new packets. This runs every function registered as a watch. Only after that is the thread's view of the database updated to the latest state.

The Rust docs are currently the most up to date, but the python package has most functions typed and commented as well.

lk_open

lk_save

lk_get

lk_get_all

lk_get_hash

lk_watch

lk_process

lk_process_while

lk_close_watch

Conventions

Rust docs

Conventions are functions built on top of the other linkspace functions. They provide a standard way for unrelated processes to loosely couple/interface with each other by encoding data into linkspace packets.

Generally they require the caller to also run lk_process or lk_process_while

One general convention is that domains and spacenames starting with \xff are reserved for meta things such as status queries and packet exchange.

lk_status_set

Status queries allow us to communicate if a process exists that is handling a specific type and a specific instance.

The function signature is (domain, group, obj_type, instance).

  • A request is a packet of the form DOMAIN:[#:0]:/\fstatus/GROUP/type(/instance?) with no data and no links.
  • A reply is of the form DOMAIN:[#:0]:/\fstatus/GROUP/type/instance with some data and at least one link.

Note that the packets are in `#:0`. This function is only for local status updates.

The group argument does not mean the request is made inside GROUP; it only signals which group the query is about. Other processes are meant to answer the request.

The following are statuses that the exchange process should set:

  • exchange GROUP process
  • exchange GROUP connection PUBKEY
  • exchange GROUP pull PULL PULL_HASH

lk_status_poll

Request the status of a `domain group obj_type ?instance timeout`.

lk_pull

A pull request is made by a domain application and signals the set of packets it wants. The function takes the query and saves it as: [f:exchange]:[#:0]:/pull/[query.group]/[query.domain]/[query.qid]

Note that from a domain's perspective, there is no such thing as 'fully synchronized'.
It is entirely up to the developer to structure their points such that it provides the right level of sync.
For example, 'log' packets that link to the known packets from a single device's perspective.

Pull queries must have the predicates domain:=:.. and group:=:.., and :qid.

An exchange process (such as in the tutorial) watches these packets and attempts to gather them. The exchange is only responsible for pull requests received when it is running. The exchange drops requests when you reuse the 'qid'. The function returns the hash of the request.

A domain application should be conservative with its query. Requesting too much can add overhead.

lk_key

Reads (or creates) an encrypted private key using the local LNS. It can then be referenced with [@:NAME:local].

LNS

See abe#lns for how to use LNS for lookup and reverse lookup.

The LNS system works by making a claim in lns:[#:pub]:/claim/test/example/john, which we'll call $Claim1. A claim can have 3 types of special links. The first link with the tag pubkey@ has as ptr the pubkey bytes to use when referring to @:john:example:test. The first link with the tag group# has as ptr the group bytes to use when referring to #:john:example:test. Every tag ending with '^', e.g. root_00^, is an authority public key. An authority has the right to vote for its direct subclaims, for example the claim lns:[#:pub]:/claim/test/example/john/home.

$Claim1 becomes 'live' when a single authority of claim/test/example creates a vote by creating a keypoint lns:[#:pub]:/claim/test/example/john with the link vote:$Claim1.hash. The first claim to get a majority of votes wins.

Advanced topics

Big data

One way of reading/writing data larger than approximately 2^16 bytes is to create a linkpoint with multiple ("data", datapoint_hash) links. That gives you space for roughly 85MB. You can go arbitrarily large by adding a ("continue", next_linkpoint) link.

python -c 'print("-" * 200000000,end="")' \
    | lk data \
    | lk collect example:: --create [epoch] --collect-tag 'data' --chain-tag 'continue'\
    | lk save -f \
    | lk filter example:: \
    | lk pktf "[hash:str] [links_len:str]"
JIhVzkxzP5kfPhjdudu0a6nDcbnOGKixsEvd4SXeoNc 1333
KxvjqFFRCRDFpzyNsXr80K3wMkBUD9Sszhfcjpt9qro 1334
1FAKPrMTY8nBQ4zeoXVvgn_ESQJcjJQ8l9fJDSmDEgA 411

Advanced tip: we could have made this shorter with collect --forward db --write db --write stdout-expr "[hash:str] [links_len]"

Note that to recreate the data you have to do a 'depth first' search starting from the last result.

lk watch-hash 1FAKPrMTY8nBQ4zeoXVvgn_ESQJcjJQ8l9fJDSmDEgA | lk get-links -R pause | lk pktf -d '' [data] > /tmp/bytes
python -c 'print("-"*200000000,end="")' > /tmp/bytes2
diff /tmp/bytes /tmp/bytes2 && echo ok
ok

Note the pktf -d '' to remove the delimiter between packets.
An alternative approach would be to use lk get-links --write file-expr:[/:/tmp/bytes]:[data] --forward null
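The scheme can be modeled end to end in plain Python (dicts standing in for the store, small integers for hashes; this illustrates the link structure and the depth-first reconstruction, not the linkspace API):

```python
# Toy model of chunked storage via "data" links and a "continue" chain.

CHUNK = 1024              # the real datapoint limit is about 2**16 - 512 bytes
LINKS_PER_POINT = 4       # real linkpoints fit far more links

def store(data: bytes):
    points = {}                                    # toy store: hash -> point
    def put(point):
        h = len(points)
        points[h] = point
        return h
    chunks = [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]
    datap = [put(("data", c)) for c in chunks]
    prev = None
    for i in range(0, len(datap), LINKS_PER_POINT):
        links = [("data", h) for h in datap[i:i + LINKS_PER_POINT]]
        if prev is not None:
            links.append(("continue", prev))       # chain to earlier linkpoint
        prev = put(("link", links))
    return points, prev                            # store + last linkpoint

def load(points, head):
    """Depth-first: follow the 'continue' link before emitting this point's data."""
    _, links = points[head]
    cont = [h for tag, h in links if tag == "continue"]
    data = b"".join(points[h][1] for tag, h in links if tag == "data")
    return (load(points, cont[0]) if cont else b"") + data

blob = bytes(range(256)) * 40
points, head = store(blob)
assert load(points, head) == blob
```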

Q&A

Why Big Endian?

The tree index is in the expected order when using the numbers as space components. E.g. lk linkpoint ::/some/dir/[now] will come after lk linkpoint ::/some/dir/[now:-1D] because now > (now - one day)
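This property is easy to check in plain Python: fixed-width big-endian encodings sort byte-wise in the same order as the numbers they encode, while little-endian ones do not.

```python
# Big-endian byte order == numeric order; little-endian breaks it.

stamps = [3, 200, 1_000_000, 7]
be = [n.to_bytes(8, "big") for n in stamps]
le = [n.to_bytes(8, "little") for n in stamps]

assert [int.from_bytes(b, "big") for b in sorted(be)] == sorted(stamps)
assert [int.from_bytes(b, "little") for b in sorted(le)] != sorted(stamps)
```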

Every user of my domain app needs X from my server/I want to add advertisements to my domain app.

Hardcode a public key into the app and combine it with a group exchange service. Either use an existing group, or use the group: their-key XOR your-key for personalized stuff

I'm not in control of the user! / Anybody in my group can leak data from it!?

If this is news to you, you're suffering from security-theater. I don't make the rules, I just make them obvious.

A domain application can write outside its own domain space.

Yes, the current API has no restriction. Maybe at some point we can effectively restrict processes through wasm or some other access control.

Why don't queries support negative predicates?

In most cases the meaning would be non-obvious and/or slow to implement, and it would remove their "string concat == union" property.

Furthermore, when you want to exclude something it is much clearer to define it in two steps: i.e. all packets in group X, excluding those with property Y.

Why not use an SQL backend? / Why create Queries?

First off, if you're asking because you want to run SQL queries: it's not too difficult to stream packets into an SQL table with `lk pktf` or some custom code, and query them there.

There are multiple reasons why it's not the primary backend/query method.

SQL isn't magic, and it's a non-trivial price to pay for something that is not a great fit for a few fundamental problems, including:

  1. What are the tables peers should have?
  2. How to constrain a query you receive/as it travels to multiple peers?
  3. How to encode bytes?

All could be solved in a number of ways, but most solutions are quickly going to bloat and usually create multiple incompatible sublanguages depending on the context.

linkspace queries support arbitrary bytes, can be constrained/tested through concatenation, have a consistent meaning w.r.t. predicates, and can easily be expanded with options.

Footnotes:

1

the hash of lk_datapoint(b"Hello, Sol!\n")

Created: 2023-11-04 Sat 19:03