Compare commits


180 commits

SHA1  Message  [CI]  Date
(CI legend: ok = all checks successful; failed = some checks failed. Status is from the ci/woodpecker/push/woodpecker pipeline unless marked "manual", i.e. ci/woodpecker/manual/woodpecker. Commits with no [CI] marker have no recorded status.)

77825d652f  fix  [CI: failed]  2024-09-26 18:15:32 -05:00
439eabd69f  add rss auto discovery  [CI: failed]  2024-09-26 18:14:27 -05:00
7a1b2b9787  trying to get typst rendering working for svgs  [CI: failed]  2024-09-26 18:09:45 -05:00
8667d33798  undraft  [CI: ok]  2024-09-20 13:29:12 -05:00
b33d601900  update to use mathsf  [CI: ok]  2024-09-20 13:18:22 -05:00
55ec917eb8  make stuff line up more  [CI: ok]  2024-09-20 13:12:58 -05:00
d344924a1d  make the title image redirect to about  [CI: ok]  2024-09-20 12:53:27 -05:00
bc92b2184b  grammar  [CI: ok]  2024-09-19 17:31:38 -05:00
2d92966043  tentative dotted background  [CI: ok]  2024-09-18 16:37:23 -05:00
4273a1ca0b  add the j  [CI: ok]  2024-09-18 16:27:15 -05:00
c9c6365c87  fix inner links  [CI: ok]  2024-09-18 16:22:37 -05:00
67e9a31def  initial draft done  [CI: ok]  2024-09-18 16:05:13 -05:00
89abd4ff02  wip  [CI: ok]  2024-09-18 15:31:29 -05:00
f166392f35  more blog post wip  [CI: ok]  2024-09-18 14:43:34 -05:00
3e4f2d0095  image :)  [CI: ok]  2024-09-18 06:54:28 -05:00
79e8a2300a  it works  [CI: ok]  2024-09-18 06:38:08 -05:00
2f4c87ea1d  shortbio  2024-09-18 06:32:18 -05:00
37d59e6927  more wip  [CI: ok]  2024-09-18 06:01:37 -05:00
0dd553ad71  add diagrams  [CI: ok]  2024-09-18 05:21:04 -05:00
71cf0079dc  start hcomp post  [CI: ok]  2024-09-18 04:47:52 -05:00
ce5ef4116d  update bio  [CI: ok]  2024-09-18 01:18:17 -05:00
aeef05a47f  faq page wip  [CI: ok]  2024-09-18 01:14:15 -05:00
198f3727e2  wip  [CI: ok]  2024-09-16 03:06:17 -05:00
764351ccb1  rip  [CI: ok]  2024-09-15 19:42:46 -05:00
16c5e3ab81  wip  [CI: failed]  2024-09-15 19:38:05 -05:00
67bb9a196c  Test rendering this page  [CI: ok]  2024-09-15 18:31:24 -05:00
66de827e37  updates  [CI: ok]  2024-09-14 23:57:48 -05:00
509fd14440  change background to be bluer  [CI: ok]  2024-09-14 23:43:39 -05:00
55da20bde0  bio line height:  [CI: ok]  2024-09-14 23:41:41 -05:00
9d073a0be6  update saturation  [CI: ok]  2024-09-14 23:40:35 -05:00
126f3357bb  update the warning box color  [CI: ok]  2024-09-14 23:36:52 -05:00
4a72c2ea7b  pijul post  [CI: ok]  2024-09-10 17:59:54 -05:00
482189f355  lc post  [CI: ok]  2024-09-10 06:31:00 -05:00
d3d9145610  some fixes  [CI: ok]  2024-09-10 05:28:25 -05:00
772c46f77b  update short bio  [CI: ok]  2024-07-26 12:50:35 -05:00
69c17a7a7a  new pic who dis  [CI: ok]  2024-07-22 13:31:07 -05:00
3949f19f1f  update short bio  [CI: ok]  2024-07-09 16:16:58 -05:00
77b3323fc6  unignore markdown files  [CI: ok]  2024-06-30 20:36:00 -05:00
76ea2d345a  undraft  [CI: ok]  2024-06-29 17:39:27 -05:00
06d483b3dd  Change the way admonitions work  [CI: ok]  2024-06-29 15:15:48 -05:00
bbfe4f67ca  More updates  2024-06-29 15:15:42 -05:00
9318880a9a  Remove allow unused metas  2024-06-29 14:53:45 -05:00
83ceb81f8a  a  [CI: ok]  2024-06-29 14:05:06 -05:00
043b3ebe74  new blog post  [CI: ok]  2024-06-28 20:48:21 -05:00
531b33442d  boolean equivalences  [CI: failed]  2024-06-28 17:46:08 -05:00
85c10d012b  fix agda colors  [CI: ok]  2024-06-27 10:27:22 -05:00
e0ff5bf910  agda  [CI: ok]  2024-06-27 00:43:35 -05:00
2b4ca03563  render primitives  [CI: ok]  2024-06-27 00:22:46 -05:00
c1d92f48f9  Fix space  [CI: ok]  2024-06-26 21:04:24 -05:00
726554826a  fix deploy  [CI: ok]  2024-06-26 20:44:17 -05:00
6324b12c33  Update builder  [CI: failed]  2024-06-26 20:38:45 -05:00
b574a929a7  Update  [CI: failed]  2024-06-26 20:29:52 -05:00
68823357ab  Add sharp  [CI: failed]  2024-06-26 19:50:00 -05:00
4f88615c31  Update builder  [CI: failed]  2024-06-26 19:47:02 -05:00
38e2a8cec5  Update dependencies  2024-06-26 19:42:02 -05:00
7375f9c81b  agda building!  [CI: failed]  2024-06-26 19:17:01 -05:00
a8f1ce9acd  agda plugin  2024-06-26 18:18:47 -05:00
b5a5f9cf0a  nix  2024-06-26 18:18:25 -05:00
65e5471c3c  ci  [CI: ok]  2024-06-21 16:06:43 -05:00
5635d03e08  update bio  [CI: manual ok]  2024-06-21 15:57:44 -05:00
4fb464325c  update th'bio  [CI: ok]  2024-06-21 01:09:32 -05:00
63d837b264  event: push  2024-06-21 01:00:23 -05:00
0d70a68769  Revert "pudate" (reverts commit 435ec21f6e)  [CI: ok]  2024-06-21 00:58:59 -05:00
5d631561b5  refactoring post  [CI: failed]  2024-06-21 00:57:37 -05:00
435ec21f6e  pudate  [CI: failed]  2024-06-11 13:54:47 -04:00
dfbdf2d4ff  update path  [CI: ok]  2024-06-11 10:58:14 -04:00
216b1c35ed  show left  [CI: ok]  2024-06-11 10:53:37 -04:00
3fcaeccf48  logical relations  [CI: ok]  2024-06-11 10:52:37 -04:00
d85b7f729f  update  2024-05-29 13:33:23 -05:00
22a8c36fff  add biome config  2024-05-29 13:32:27 -05:00
6224860b0a  add goatcounter  [CI: ok]  2024-05-03 16:08:54 -05:00
4fc2c3e589  Update src/content/posts/2024-05-02-ddr/index.mdx  [CI: ok]  2024-05-03 18:13:56 +00:00
85ce708153  update  [CI: ok]  2024-05-03 06:35:52 -05:00
71481acbf1  pipeline  [CI: ok]  2024-05-03 06:26:42 -05:00
f913eb52e2  ddr  2024-05-03 06:22:47 -05:00
b402ee102f  add comments  [CI: ok]  2024-05-03 05:00:04 -05:00
bc8cb94181  tmp ignore  [CI: ok]  2024-05-02 15:51:23 -05:00
1cb392dbae  woodpecker  [CI: failed]  2024-05-02 15:48:55 -05:00
9bb4c462d8  update  2024-05-02 15:06:19 -05:00
1d48472aa2  start ddr post  2024-05-02 13:00:04 -05:00
1d3a17937d  git attributes  2024-04-21 18:24:14 -05:00
cbc83ae11d  what the hell  2024-04-20 18:22:15 -05:00
31d4b1fb71  update readme  2024-04-20 01:14:46 -05:00
1f454d6883  lambda calc post  2024-04-20 01:14:46 -05:00
42cbda6ae1  fix the header issue  2024-04-20 01:14:46 -05:00
1f8aac0a5e  use alpine/edge  2024-04-20 01:14:46 -05:00
d4fe025437  typst  2024-04-20 01:14:46 -05:00
90f7fa2ee9  home to postlist  2024-04-20 01:14:46 -05:00
ea600b05f7  make the lines more condensed  2024-04-20 01:14:46 -05:00
62b4c105a9  style  2024-04-20 01:14:46 -05:00
565de50eb3  change colors  2024-04-20 01:14:46 -05:00
f048ce45ac  remove box shadow  2024-04-20 01:14:46 -05:00
56c95f6051  bio  2024-04-20 01:14:46 -05:00
4768a49fce  css  2024-04-20 01:14:46 -05:00
4dafc1b3e8  sourcehut publish  2024-04-20 01:14:46 -05:00
e1cb9b5e0f  sudo  2024-04-20 01:14:46 -05:00
226264f124  node > npm  2024-04-20 01:14:46 -05:00
dbbcc3aae7  build  2024-04-20 01:14:46 -05:00
32098a3278  Add 'public/.well-known/atproto-did' (author: Michael Zhang)  [CI: manual failed]  2024-02-29 09:06:48 -06:00
901fb1c005  a  2024-02-05 15:18:35 -06:00
f8e04d7342  install pnpm  [CI: ok]  2023-10-24 14:05:49 -05:00
0adaf0c122  try node 20  [CI: failed]  2023-10-24 14:05:09 -05:00
da64309feb  undraft path induction post  [CI: failed]  2023-10-24 14:02:45 -05:00
d70327ffb9  tags on home page  [CI: failed]  2023-10-24 14:00:40 -05:00
4895546226  minor: parameter -> index  [CI: ok]  2023-10-24 11:02:19 -05:00
396e1f5098  not ish  [CI: ok]  2023-10-23 22:22:36 -05:00
025437ca10  more sane date format  [CI: ok]  2023-10-23 21:50:45 -05:00
bd63dba9df  use the original names  [CI: ok]  2023-10-23 21:38:28 -05:00
5216167465  path induction post  [CI: ok]  2023-10-23 21:26:47 -05:00
f50ec75e01  comment out all the other build stuff  [CI: ok]  2023-10-12 18:34:57 -05:00
0e0249d113  sad  [CI: failed]  2023-10-12 17:22:58 -05:00
92c7982c0c  wtf  2023-10-11 18:01:49 -05:00
183c998199  z  [CI: failed]  2023-10-11 17:43:51 -05:00
fe2377b0b4  z  [CI: failed]  2023-10-11 17:43:31 -05:00
0c3659055a  z  [CI: failed]  2023-10-11 17:42:41 -05:00
8670e8cf7b  a  [CI: failed]  2023-10-11 17:40:21 -05:00
3e07267470  zz  [CI: failed]  2023-10-11 17:38:07 -05:00
9ecc011029  a  [CI: failed]  2023-10-11 17:36:46 -05:00
e22340a98d  test  [CI: failed]  2023-10-11 17:34:34 -05:00
948c982418  add packages  [CI: failed]  2023-10-11 17:30:50 -05:00
b33ba945b6  a  [CI: failed]  2023-10-11 17:27:56 -05:00
5319b38560  build  [CI: failed]  2023-10-11 17:19:13 -05:00
1e5bef0b6f  use agda  [CI: failed]  2023-10-11 16:17:08 -05:00
efd0b420dd  build agda  [CI: failed]  2023-10-11 16:16:32 -05:00
11e1601426  builder  [CI: failed]  2023-10-11 14:46:16 -05:00
1ff3499e81  rip  [CI: failed]  2023-10-11 13:42:22 -05:00
43a56c941a  docker shit  [CI: failed]  2023-10-11 13:09:15 -05:00
3935a8934c  agda fix  [CI: failed]  2023-10-11 12:47:18 -05:00
3833a310e0  render agda  [CI: failed]  2023-10-11 11:48:34 -05:00
334e6cb1bf  add edit history  [CI: ok]  2023-10-04 09:16:04 -05:00
b36a272815  enable syntax highlighting  [CI: ok]  2023-10-02 17:36:09 -05:00
263ed46253  time to bury this forever  2023-09-21 17:40:43 -05:00
328680c54b  equality notes  [CI: push ok, manual failed]  2023-09-15 01:42:41 -05:00
f9e388eb6e  fix ordering issue  2023-09-15 00:35:19 -05:00
109c2997c3  border radius  [CI: ok]  2023-09-08 17:24:51 -05:00
2d7abd47d0  dont be so verbose  [CI: ok]  2023-09-08 05:15:25 -05:00
99e0e5ebab  push  [CI: failed]  2023-09-08 05:12:55 -05:00
f497f3536a  compiler?  2023-09-08 05:10:52 -05:00
6bb860f61a  oops  [CI: ok]  2023-09-01 22:32:18 -05:00
ab1597a28b  add utterances  [CI: failed]  2023-09-01 22:25:11 -05:00
c4b29c5e6f  FUCK  [CI: ok]  2023-09-01 21:38:53 -05:00
60db0faba3  fix width  [CI: ok]  2023-09-01 21:37:13 -05:00
89ece03307  remove prints  2023-09-01 20:59:39 -05:00
a670600ad7  scroll spy  [CI: ok]  2023-09-01 20:59:23 -05:00
e3ded78ba3  make toc look better  [CI: ok]  2023-09-01 19:30:15 -05:00
13d83842aa  fix padding again  [CI: ok]  2023-09-01 19:26:23 -05:00
424c056a09  fix padding  [CI: ok]  2023-09-01 19:25:56 -05:00
3bf0200e7e  wip toc  [CI: ok]  2023-09-01 12:38:59 -05:00
e94fee5345  latexify  [CI: ok]  2023-09-01 12:05:29 -05:00
49cfb3ccc4  no extra padding on left on mobile  [CI: ok]  2023-09-01 10:12:29 -05:00
8e1fac9bd5  admonitions  [CI: ok]  2023-09-01 10:09:01 -05:00
5ecc5f8eed  update  [CI: ok]  2023-09-01 09:24:12 -05:00
b47c8ffae3  cek post revive  [CI: ok]  2023-09-01 09:16:36 -05:00
0627d6b0d9  author  [CI: ok]  2023-09-01 08:02:15 -05:00
2122aca697  keywords  [CI: ok]  2023-09-01 08:01:19 -05:00
4a03a76348  more opengraph metadata  [CI: ok]  2023-09-01 07:59:21 -05:00
0ef4b692e4  networking?  [CI: ok]  2023-08-31 23:15:54 -05:00
88fd13a01a  code blocks  [CI: ok]  2023-08-31 22:45:24 -05:00
2a1ebcf251  intro  [CI: ok]  2023-08-31 22:38:49 -05:00
edcbc68af3  changes  [CI: ok]  2023-08-31 22:29:45 -05:00
6468b29dd7  minor edits  [CI: ok]  2023-08-31 22:25:25 -05:00
62ad559dbd  sidebar styling  [CI: ok]  2023-08-31 22:04:16 -05:00
c7c8be95b5  image borders  [CI: ok]  2023-08-31 21:59:12 -05:00
b61a54de22  bigger title  [CI: ok]  2023-08-31 21:53:16 -05:00
f3d89cfb61  more changes  [CI: ok]  2023-08-31 21:52:21 -05:00
83d21cf9b6  remove more technical stuff  [CI: ok]  2023-08-31 21:41:52 -05:00
b29d3dda82  mermaid  [CI: ok]  2023-08-31 19:10:20 -05:00
8c5c341f44  margin is too big  [CI: ok]  2023-08-31 15:17:40 -05:00
35f6d93ca0  about page  [CI: ok]  2023-08-31 14:25:04 -05:00
0c6f3ff48a  wording  [CI: ok]  2023-08-31 14:14:48 -05:00
322441a144  metadata  [CI: ok]  2023-08-31 10:37:26 -05:00
b892dfa000  what  [CI: ok]  2023-08-31 10:33:26 -05:00
0c2ac5e521  logseq post  2023-08-31 10:33:14 -05:00
10426919e1  include drafts  [CI: ok]  2023-08-31 09:05:06 -05:00
86feeefbe3  remove brackets  [CI: ok]  2023-08-31 08:53:36 -05:00
2c1b3c5677  blockquote  [CI: ok]  2023-08-31 08:47:19 -05:00
e8f1437d95  no padding on links  [CI: ok]  2023-08-31 08:46:08 -05:00
e2cc513dd4  a  [CI: ok]  2023-08-31 08:39:21 -05:00
c478116f88  fill in site  2023-08-31 03:17:02 -05:00
a3a7d84d1e  pragmata  [CI: ok]  2023-08-31 03:09:41 -05:00
123 changed files with 12433 additions and 6620 deletions

.build.yml (new file, +28)

@@ -0,0 +1,28 @@
image: alpine/edge
oauth: pages.sr.ht/PAGES:RW
packages:
  - hut
  - npm
  - rsync
  - typst
environment:
  site: mzhang.io
secrets:
  - 0b26b413-7901-41c3-a4e2-3c752228ffcb
sources:
  - https://git.sr.ht/~mzhang/blog
tasks:
  - install: |
      sudo npm install -g pnpm
  - build: |
      cd blog
      pnpm install
      pnpm run build
      # hugo --buildDrafts --minify --baseURL https://mzhang.io
  - upload: |
      cd blog/dist
      tar -cvz . > ../site.tar.gz
      cd ..
      hut pages publish -d $site site.tar.gz
      # echo "mzhang.io ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBzBZ+QmM4EO3Fwc1ZcvWV2IY9VF04T0H9brorGj9Udp" >> ~/.ssh/known_hosts
      # rsync -azvrP dist/ sourcehutBuilds@mzhang.io:/mnt/storage/svcdata/blog-public

.dockerignore (new file, +1)

@@ -0,0 +1 @@
*

.editorconfig

@@ -6,4 +6,4 @@ insert_final_newline = true
 trim_trailing_whitespace = true
 charset = utf-8
 indent_style = space
-indent_size = 4
+indent_size = 2

.gitattributes (vendored, new file, +2)

@@ -0,0 +1,2 @@
public/katex/**/* linguist-vendored
src/**/*.md linguist-documentation=false

.gitignore (vendored, +13)

@@ -19,3 +19,16 @@ pnpm-debug.log*
 # macOS-specific files
 .DS_Store
+
+PragmataPro-Mono-Liga-Regular-Nerd-Font-Complete.woff2
+*.agdai
+_build
+.direnv
+/result
+.pnpm-store
+/public/generated
+.frontmatter

.prettierignore (new file, +3)

@@ -0,0 +1,3 @@
public/katex
pnpm-lock.yaml
src/styles/fork-awesome

.tokeignore (new file, +4)

@@ -0,0 +1,4 @@
public
package-lock.json
src/styles/fork-awesome
pnpm-lock.yaml

.vscode/extensions.json

@@ -1,4 +1,7 @@
 {
-  "recommendations": ["astro-build.astro-vscode"],
+  "recommendations": [
+    "astro-build.astro-vscode",
+    "eliostruyf.vscode-front-matter"
+  ],
   "unwantedRecommendations": []
 }

.woodpecker.yml

@@ -1,19 +1,26 @@
-pipeline:
+steps:
   build:
-    image: node:18
+    image: git.mzhang.io/michael/blog-docker-builder:x0m1c7r2qzc3qbyw8sfv5flfv75xiqdz
+    environment:
+      - ASTRO_TELEMETRY_DISABLED=1
     commands:
-      - npm install
-      - npm run build
+      - mkdir /tmp
+      - rm -rf node_modules
+      - npm i -g pnpm@9.4.0
+      - npx pnpm install --frozen-lockfile
+      - npx pnpm run build
+    when:
+      - event: push

   deploy:
-    image: alpine
+    image: git.mzhang.io/michael/blog-docker-builder:x0m1c7r2qzc3qbyw8sfv5flfv75xiqdz
     commands:
-      - apk add rsync openssh
       - echo "$${SSH_SECRET_KEY}" > SSH_SECRET_KEY
       - chmod 600 SSH_SECRET_KEY
       - mkdir -p ~/.ssh
-      - echo "mzhang.io ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBzBZ+QmM4EO3Fwc1ZcvWV2IY9VF04T0H9brorGj9Udp" >> ~/.ssh/known_hosts
-      - rsync -azvrP -e "ssh -i SSH_SECRET_KEY" dist/ sourcehutBuilds@mzhang.io:/mnt/storage/svcdata/blog-public
+      - echo "mzhang.io ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIO+RKFYjD8UjqhfOHFHNyijkbGzC4fJEIIhrBwHj+FsQ" >> known_hosts
+      - rsync -azrP -e "ssh -i SSH_SECRET_KEY -o UserKnownHostsFile=known_hosts" dist/ blogDeploy@mzhang.io:/home/blogDeploy/public
     secrets: [SSH_SECRET_KEY]
     when:
-      branch: master
+      - branch: master
+        event: push

README.md

@@ -1,68 +1,5 @@
-# Astro Starter Kit: Blog
-
-```
-npm create astro@latest -- --template blog
-```
-
-[![Open in StackBlitz](https://developer.stackblitz.com/img/open_in_stackblitz.svg)](https://stackblitz.com/github/withastro/astro/tree/latest/examples/blog)
-[![Open with CodeSandbox](https://assets.codesandbox.io/github/button-edit-lime.svg)](https://codesandbox.io/p/sandbox/github/withastro/astro/tree/latest/examples/blog)
-[![Open in GitHub Codespaces](https://github.com/codespaces/badge.svg)](https://codespaces.new/withastro/astro?devcontainer_path=.devcontainer/blog/devcontainer.json)
-
-> 🧑‍🚀 **Seasoned astronaut?** Delete this file. Have fun!
-
-![blog](https://github.com/withastro/astro/assets/2244813/ff10799f-a816-4703-b967-c78997e8323d)
-
-Features:
-
-- ✅ Minimal styling (make it your own!)
-- ✅ 100/100 Lighthouse performance
-- ✅ SEO-friendly with canonical URLs and OpenGraph data
-- ✅ Sitemap support
-- ✅ RSS Feed support
-- ✅ Markdown & MDX support
-
-## 🚀 Project Structure
-
-Inside of your Astro project, you'll see the following folders and files:
-
-```
-├── public/
-├── src/
-│   ├── components/
-│   ├── content/
-│   ├── layouts/
-│   └── pages/
-├── astro.config.mjs
-├── README.md
-├── package.json
-└── tsconfig.json
-```
-
-Astro looks for `.astro` or `.md` files in the `src/pages/` directory. Each page is exposed as a route based on its file name.
-
-There's nothing special about `src/components/`, but that's where we like to put any Astro/React/Vue/Svelte/Preact components.
-
-The `src/content/` directory contains "collections" of related Markdown and MDX documents. Use `getCollection()` to retrieve posts from `src/content/blog/`, and type-check your frontmatter using an optional schema. See [Astro's Content Collections docs](https://docs.astro.build/en/guides/content-collections/) to learn more.
-
-Any static assets, like images, can be placed in the `public/` directory.
-
-## 🧞 Commands
-
-All commands are run from the root of the project, from a terminal:
-
-| Command                   | Action                                           |
-| :------------------------ | :----------------------------------------------- |
-| `npm install`             | Installs dependencies                            |
-| `npm run dev`             | Starts local dev server at `localhost:4321`      |
-| `npm run build`           | Build your production site to `./dist/`          |
-| `npm run preview`         | Preview your build locally, before deploying     |
-| `npm run astro ...`       | Run CLI commands like `astro add`, `astro check` |
-| `npm run astro -- --help` | Get help using the Astro CLI                     |
-
-## 👀 Want to learn more?
-
-Check out [our documentation](https://docs.astro.build) or jump into our [Discord server](https://astro.build/chat).
-
-## Credit
-
-This theme is based off of the lovely [Bear Blog](https://github.com/HermanMartinus/bearblog/).
+# Michael's Blog
+
+https://mzhang.io
+
+License: GPL-3.0 / CC-BY-SA-4.0

astro.config.mjs

@@ -1,15 +1,57 @@
 import { defineConfig } from "astro/config";
 import mdx from "@astrojs/mdx";
 import sitemap from "@astrojs/sitemap";
-import { remarkReadingTime } from "./plugin/remark-reading-time";
+import { rehypeAccessibleEmojis } from "rehype-accessible-emojis";
+import remarkReadingTime from "./plugin/remark-reading-time";
+import remarkEmoji from "remark-emoji";
+import remarkDescription from "astro-remark-description";
+import remarkAdmonitions, { mkdocsConfig } from "./plugin/remark-admonitions";
+import remarkMath from "remark-math";
+import rehypeKatex from "rehype-katex";
+import remarkTypst from "./plugin/remark-typst";
+import rehypeLinkHeadings from "@justfork/rehype-autolink-headings";
+import rehypeSlug from "rehype-slug";
+import markdoc from "@astrojs/markdoc";
+import remarkAgda from "./plugin/remark-agda";
+
+const outDir = "dist";
+const base = process.env.BASE ?? "/";
+const publicDir = "public";

 // https://astro.build/config
 export default defineConfig({
-  site: "https://example.com",
-  integrations: [mdx(), sitemap()],
+  site: "https://mzhang.io",
+  prefetch: true,
+  integrations: [mdx(), sitemap(), markdoc()],
+  outDir,
+  base,
+  trailingSlash: "always",
+  publicDir,
   markdown: {
-    syntaxHighlight: false,
-    remarkPlugins: [remarkReadingTime],
+    syntaxHighlight: "shiki",
+    shikiConfig: { theme: "css-variables" },
+    remarkPlugins: [
+      () => remarkAgda({ outDir, base, publicDir }),
+      remarkMath,
+      [remarkAdmonitions, mkdocsConfig],
+      remarkReadingTime,
+      remarkTypst,
+      remarkEmoji,
+      [
+        remarkDescription,
+        {
+          name: "excerpt",
+        },
+      ],
+    ],
+    rehypePlugins: [
+      [rehypeKatex, {}],
+      rehypeAccessibleEmojis,
+      rehypeSlug,
+      [rehypeLinkHeadings, { behavior: "wrap" }],
+    ],
   },
 });

biome.json (new file, +17)

@@ -0,0 +1,17 @@
{
"$schema": "https://biomejs.dev/schemas/1.7.3/schema.json",
"organizeImports": {
"enabled": true
},
"formatter": {
"enabled": true,
"indentStyle": "space",
"indentWidth": 2
},
"linter": {
"enabled": true,
"rules": {
"recommended": true
}
}
}

blog.agda-lib (new file, +2)

@@ -0,0 +1,2 @@
include: src/content/posts src
depend: standard-library cubical

flake.lock (new file, +130)

@@ -0,0 +1,130 @@
{
"nodes": {
"agda": {
"inputs": {
"flake-parts": "flake-parts",
"nixpkgs": "nixpkgs"
},
"locked": {
"lastModified": 1719426791,
"narHash": "sha256-jiPGzJM+XMwb1OwyxpbY6jV0Lzr5yLKnPSMZbueFNRM=",
"owner": "agda",
"repo": "agda",
"rev": "777b07f573c7526d677c18c25e44b24a849bc03b",
"type": "github"
},
"original": {
"owner": "agda",
"repo": "agda",
"type": "github"
}
},
"flake-parts": {
"inputs": {
"nixpkgs-lib": "nixpkgs-lib"
},
"locked": {
"lastModified": 1701473968,
"narHash": "sha256-YcVE5emp1qQ8ieHUnxt1wCZCC3ZfAS+SRRWZ2TMda7E=",
"owner": "hercules-ci",
"repo": "flake-parts",
"rev": "34fed993f1674c8d06d58b37ce1e0fe5eebcb9f5",
"type": "github"
},
"original": {
"owner": "hercules-ci",
"repo": "flake-parts",
"type": "github"
}
},
"flake-utils": {
"inputs": {
"systems": "systems"
},
"locked": {
"lastModified": 1710146030,
"narHash": "sha256-SZ5L6eA7HJ/nmkzGG7/ISclqe6oZdOZTNoesiInkXPQ=",
"owner": "numtide",
"repo": "flake-utils",
"rev": "b1d9ab70662946ef0850d488da1c9019f3a9752a",
"type": "github"
},
"original": {
"id": "flake-utils",
"type": "indirect"
}
},
"nixpkgs": {
"locked": {
"lastModified": 1702346276,
"narHash": "sha256-eAQgwIWApFQ40ipeOjVSoK4TEHVd6nbSd9fApiHIw5A=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "cf28ee258fd5f9a52de6b9865cdb93a1f96d09b7",
"type": "github"
},
"original": {
"owner": "NixOS",
"ref": "nixos-23.11",
"repo": "nixpkgs",
"type": "github"
}
},
"nixpkgs-lib": {
"locked": {
"dir": "lib",
"lastModified": 1701253981,
"narHash": "sha256-ztaDIyZ7HrTAfEEUt9AtTDNoCYxUdSd6NrRHaYOIxtk=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "e92039b55bcd58469325ded85d4f58dd5a4eaf58",
"type": "github"
},
"original": {
"dir": "lib",
"owner": "NixOS",
"ref": "nixos-unstable",
"repo": "nixpkgs",
"type": "github"
}
},
"nixpkgs_2": {
"locked": {
"lastModified": 1718089647,
"narHash": "sha256-COO4Xk2EzlZ3x9KCiJildlAA6cYDSPlnY8ms7pKl2Iw=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "f7207adcc68d9cafa29e3cd252a18743ae512c6a",
"type": "github"
},
"original": {
"id": "nixpkgs",
"type": "indirect"
}
},
"root": {
"inputs": {
"agda": "agda",
"flake-utils": "flake-utils",
"nixpkgs": "nixpkgs_2"
}
},
"systems": {
"locked": {
"lastModified": 1681028828,
"narHash": "sha256-Vy1rq5AaRuLzOxct8nz4T6wlgyUR7zLU309k9mBC768=",
"owner": "nix-systems",
"repo": "default",
"rev": "da67096a3b9bf56a91d16901293e51ba5b49a27e",
"type": "github"
},
"original": {
"owner": "nix-systems",
"repo": "default",
"type": "github"
}
}
},
"root": "root",
"version": 7
}

flake.nix (new file, +34)

@@ -0,0 +1,34 @@
{
inputs.agda.url = "github:agda/agda";
outputs = { self, nixpkgs, flake-utils, agda, }:
flake-utils.lib.eachDefaultSystem (system:
let
pkgs = import nixpkgs { inherit system; };
agda-pkg = agda.packages.x86_64-linux.default;
flakePkgs = rec {
agda-bin = pkgs.callPackage ./nix/agda-bin.nix { inherit agda-pkg; };
docker-builder =
pkgs.callPackage ./nix/docker-builder.nix { inherit agda-bin; };
};
in {
packages = flake-utils.lib.flattenTree flakePkgs;
devShell = pkgs.mkShell {
ASTRO_TELEMETRY_DISABLED = 1;
packages = with pkgs;
with flakePkgs; [
bun
woodpecker-cli
nixfmt-rfc-style
dive
nix-tree
vips
shellcheck
agda-bin
nodejs_20
corepack
];
};
});
}
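
For local testing, the flake outputs above can be exercised directly. A minimal sketch, assuming Nix with flakes enabled on x86_64-linux (the only system the agda input is pinned for):

# enter the dev shell declared above (provides agda-bin, nodejs_20, corepack, ...)
nix develop
# build the wrapped Agda compiler exposed through flakePkgs
nix build .#agda-bin && ./result/bin/agda --version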

frontmatter.json (new file, +47)

@@ -0,0 +1,47 @@
{
"$schema": "https://frontmatter.codes/frontmatter.schema.json",
"frontMatter.framework.id": "astro",
"frontMatter.preview.host": "http://localhost:4321",
"frontMatter.content.publicFolder": "public",
"frontMatter.content.pageFolders": [
{
"title": "posts",
"path": "[[workspace]]/src/content/posts"
}
],
"frontMatter.taxonomy.contentTypes": [
{
"name": "default",
"pageBundle": false,
"previewPath": "",
"filePrefix": null,
"clearEmpty": true,
"fields": [
{
"title": "Title",
"name": "title",
"type": "string",
"single": true
},
{
"title": "Description",
"name": "description",
"type": "string"
},
{
"title": "Publishing date",
"name": "date",
"type": "datetime",
"default": "{{now}}",
"isPublishDate": true
},
{
"title": "Content preview",
"name": "heroImage",
"type": "image",
"isPreviewImage": true
}
]
}
]
}

nix/agda-bin.nix (new file, +23)

@@ -0,0 +1,23 @@
{ agda-pkg, runCommand, writeShellScriptBin, writeTextFile, agdaPackages }:
let
libraryFile =
with agdaPackages;
writeTextFile {
name = "agda-libraries";
text = ''
${agdaPackages.cubical}/cubical.agda-lib
${agdaPackages.standard-library}/standard-library.agda-lib
'';
};
# Add an extra layer of indirection here to prevent all of GHC from being pulled in
wtf = runCommand "agda-bin" { } ''
cp ${agda-pkg}/bin/agda $out
'';
in
writeShellScriptBin "agda" ''
set -euo pipefail
exec ${wtf} --library-file=${libraryFile} $@
''
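
This wrapper bakes the cubical and standard-library locations into a fixed --library-file, so CI needs no per-user ~/.agda setup, and the extra `wtf` copy keeps the rest of GHC out of the runtime closure. A hedged usage sketch (file paths are illustrative; the flags mirror how plugin/remark-agda.ts invokes agda):

nix build .#agda-bin
# render a literate Agda file to highlighted HTML
./result/bin/agda --html --html-dir=out --html-highlight=code src/Prelude.agda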

nix/docker-builder.nix (new file, +43)

@@ -0,0 +1,43 @@
{ dockerTools
, agda-bin
, corepack
, rsync
, openssh
, bash
, coreutils
, nodejs_20
, gnused
, pkgsLinux
}:
dockerTools.buildLayeredImage {
name = "blog-docker-builder";
contents = with dockerTools; [
agda-bin
corepack
rsync
openssh
bash
coreutils
nodejs_20
gnused
usrBinEnv
caCertificates
fakeNss
];
# fakeRootCommands = ''
# #!${pkgsLinux.runtimeShell}
# ${pkgsLinux.dockerTools.shadowSetup}
# groupadd -r builder
# useradd -r -g builder builder
# '';
}
# copyToRoot = with dockerTools; buildEnv {
# name = "blog-docker-builder-image-root";
# paths = [
# ];
# };

package-lock.json (generated; 6359 lines changed; file diff suppressed because it is too large)

package.json

@@ -2,27 +2,54 @@
   "name": "blog",
   "type": "module",
   "version": "0.0.1",
+  "packageManager": "pnpm@9.4.0+sha256.b6fd0bfda555e7e584ad7e56b30c68b01d5a04f9ee93989f4b93ca8473c49c74",
   "scripts": {
     "dev": "astro dev",
     "start": "astro dev",
     "build": "astro build",
     "preview": "astro preview",
-    "astro": "astro"
+    "astro": "astro",
+    "format": "prettier -w ."
   },
   "dependencies": {
-    "@astrojs/mdx": "^1.0.0",
-    "@astrojs/rss": "^3.0.0",
-    "@astrojs/sitemap": "^3.0.0",
-    "astro": "^3.0.3",
+    "@astrojs/markdoc": "^0.11.1",
+    "@astrojs/markdown-remark": "^5.1.1",
+    "@astrojs/mdx": "^1.1.5",
+    "@astrojs/rss": "^4.0.7",
+    "@astrojs/sitemap": "^3.1.6",
+    "@justfork/rehype-autolink-headings": "^5.1.1",
+    "astro": "^3.6.5",
+    "astro-imagetools": "^0.9.0",
+    "astro-remark-description": "^1.1.2",
+    "classnames": "^2.5.1",
     "fork-awesome": "^1.2.0",
+    "katex": "^0.16.10",
     "lodash-es": "^4.17.21",
     "mdast-util-to-string": "^4.0.0",
-    "reading-time": "^1.5.0"
+    "nanoid": "^4.0.2",
+    "reading-time": "^1.5.0",
+    "rehype-accessible-emojis": "^0.3.2",
+    "rehype-katex": "^6.0.3",
+    "remark-emoji": "^4.0.1",
+    "remark-github-beta-blockquote-admonitions": "^2.2.1",
+    "remark-math": "^5.1.1",
+    "remark-parse": "^10.0.2",
+    "sharp": "^0.33.4"
   },
   "devDependencies": {
-    "@types/lodash-es": "^4.17.9",
-    "prettier": "^3.0.3",
-    "prettier-plugin-astro": "^0.12.0",
-    "sass": "^1.66.1"
+    "@biomejs/biome": "^1.8.2",
+    "@types/lodash-es": "^4.17.12",
+    "date-fns": "^2.30.0",
+    "hast-util-from-html": "^2.0.1",
+    "hast-util-to-html": "^9.0.1",
+    "mdast": "^3.0.0",
+    "mdast-util-from-markdown": "^2.0.1",
+    "prettier": "^3.3.2",
+    "prettier-plugin-astro": "^0.12.3",
+    "rehype-slug": "^6.0.0",
+    "sass": "^1.77.6",
+    "shiki": "^0.14.7",
+    "unified": "^11.0.5",
+    "unist-util-visit": "^5.0.0"
   }
 }

plugin/remark-admonitions.ts (new file, +155; name inferred from the import in astro.config.mjs)

@@ -0,0 +1,155 @@
// https://github.com/myl7/remark-github-beta-blockquote-admonitions
// License: Apache-2.0
import { visit } from "unist-util-visit";
import type { Data } from "unist";
import type { BuildVisitor } from "unist-util-visit";
import type { Blockquote, Paragraph, Text } from "mdast";
import type { RemarkPlugin } from "@astrojs/markdown-remark";
import classNames from "classnames";
const remarkAdmonitions: RemarkPlugin =
(providedConfig?: Partial<Config>) => (tree) => {
visit(tree, handleNode({ ...defaultConfig, ...providedConfig }));
};
export default remarkAdmonitions;
const handleNode =
(config: Config): BuildVisitor =>
(node) => {
// Filter required elems
if (node.type !== "blockquote") return;
const blockquote = node as Blockquote;
if (blockquote.children[0]?.type !== "paragraph") return;
const paragraph = blockquote.children[0];
if (paragraph.children[0]?.type !== "text") return;
const text = paragraph.children[0];
// A link break after the title is explicitly required by GitHub
const titleEnd = text.value.indexOf("\n");
if (titleEnd < 0) return;
const textBody = text.value.substring(titleEnd + 1);
let title = text.value.substring(0, titleEnd);
// Handle whitespaces after the title.
// Whitespace characters are defined by GFM
const m = /[ \t\v\f\r]+$/.exec(title);
if (m && !config.titleKeepTrailingWhitespaces) {
title = title.substring(0, title.length - m[0].length);
}
if (!nameFilter(config.titleFilter)(title)) return;
const { displayTitle, checkedTitle } = config.titleTextMap(title);
// Update the text body
text.value = textBody;
// Insert the title element and add classes for the title
const paragraphTitleText: Text = { type: "text", value: displayTitle };
const paragraphTitle: Paragraph = {
type: "paragraph",
children: [paragraphTitleText],
data: config.dataMaps.title({
hProperties: {
className: classNameMap(config.classNameMaps.title)(checkedTitle),
},
}),
};
blockquote.children.unshift(paragraphTitle);
// Add classes for the block
blockquote.data = config.dataMaps.block({
...blockquote.data,
hProperties: {
className: classNameMap(config.classNameMaps.block)(checkedTitle),
},
// The blockquote should be rendered as a div, which is explicitly required by GitHub
hName: "div",
});
};
const TITLE_PATTERN =
/\[\!admonition: (attention|caution|danger|error|hint|important|note|tip|warning)\]/i;
export const mkdocsConfig: Partial<Config> = {
classNameMaps: {
block: (title) => [
"admonition",
...(title.startsWith("admonition: ")
? title.substring("admonition: ".length)
: title
).split(" "),
],
title: classNames("admonition-title"),
},
titleFilter: (title) => Boolean(title.match(TITLE_PATTERN)),
titleTextMap: (title: string) => {
const match = title.match(TITLE_PATTERN);
const displayTitle = match?.[1] ?? "";
const checkedTitle = displayTitle;
return { displayTitle, checkedTitle };
},
};
export interface Config {
classNameMaps: {
block: ClassNameMap;
title: ClassNameMap;
};
titleFilter: NameFilter;
titleTextMap: (title: string) => {
displayTitle: string;
checkedTitle: string;
};
dataMaps: {
block: (data: Data) => Data;
title: (data: Data) => Data;
};
titleKeepTrailingWhitespaces: boolean;
legacyTitle: boolean;
}
export const defaultConfig: Config = {
classNameMaps: {
block: "admonition",
title: "admonition-title",
},
titleFilter: ["[!NOTE]", "[!IMPORTANT]", "[!WARNING]"],
titleTextMap: (title) => ({
displayTitle: title.substring(2, title.length - 1),
checkedTitle: title.substring(2, title.length - 1),
}),
dataMaps: {
block: (data) => data,
title: (data) => data,
},
titleKeepTrailingWhitespaces: false,
legacyTitle: false,
};
type ClassNames = string | string[];
type ClassNameMap = ClassNames | ((title: string) => ClassNames);
export function classNameMap(gen: ClassNameMap) {
return (title: string) => {
const classNames = typeof gen === "function" ? gen(title) : gen;
return typeof classNames === "object" ? classNames.join(" ") : classNames;
};
}
type NameFilter = ((title: string) => boolean) | string[];
export function nameFilter(filter: NameFilter) {
return (title: string) => {
return typeof filter === "function"
? filter(title)
: filter.includes(title);
};
}
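
Given mkdocsConfig's TITLE_PATTERN above, a blockquote is only transformed when its first line is exactly the marker followed by a line break. A hypothetical post snippet that would match:

> [!admonition: warning]
> Body text here; the blockquote is re-rendered as a div with the classes "admonition warning", and the inserted title paragraph gets the class "admonition-title".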

plugin/remark-agda.ts (new file, +178)

@@ -0,0 +1,178 @@
import type { RootContent } from "hast";
import { fromMarkdown } from "mdast-util-from-markdown";
import { fromHtml } from "hast-util-from-html";
import { toHtml } from "hast-util-to-html";
import { spawnSync } from "node:child_process";
import {
mkdirSync,
mkdtempSync,
readFileSync,
existsSync,
readdirSync,
copyFileSync,
writeFileSync,
} from "node:fs";
import { tmpdir } from "node:os";
import { join, parse, resolve, basename } from "node:path";
import { visit } from "unist-util-visit";
import type { RemarkPlugin } from "@astrojs/markdown-remark";
interface Options {
base: string;
outDir: string;
publicDir: string;
}
const remarkAgda: RemarkPlugin = ({ base, publicDir }: Options) => {
const destDir = join(publicDir, "generated", "agda");
mkdirSync(destDir, { recursive: true });
return (tree, { history }) => {
const path: string = history[history.length - 1]!;
if (!(path.endsWith(".lagda.md") || path.endsWith(".agda"))) return;
console.log("AGDA:processing path", path);
const tempDir = mkdtempSync(join(tmpdir(), "agdaRender."));
const agdaOutDir = join(tempDir, "output");
mkdirSync(agdaOutDir, { recursive: true });
const childOutput = spawnSync(
"agda",
[
"--html",
`--html-dir=${agdaOutDir}`,
"--highlight-occurrences",
"--html-highlight=code",
path,
],
{},
);
// Locate output file
const directory = readdirSync(agdaOutDir);
const outputFilename = directory.find((name) => name.endsWith(".md"));
if (childOutput.status !== 0 || outputFilename === undefined) {
throw new Error(
`Agda output:
Stdout:
${childOutput.stdout}
Stderr:
${childOutput.stderr}
`,
{
cause: childOutput.error,
},
);
}
const outputFile = join(agdaOutDir, outputFilename);
// // TODO: Handle child output
// console.error("--AGDA OUTPUT--");
// console.error(childOutput);
// console.error(childOutput.stdout?.toString());
// console.error(childOutput.stderr?.toString());
// console.error("--AGDA OUTPUT--");
const referencedFiles = new Set();
for (const file of readdirSync(agdaOutDir)) {
referencedFiles.add(file);
const fullPath = join(agdaOutDir, file);
const fullDestPath = join(destDir, file);
if (file.endsWith(".html")) {
const src = readFileSync(fullPath);
writeFileSync(
fullDestPath,
`
<!DOCTYPE html>
<html>
<head>
<link rel="stylesheet" href="${base}generated/agda/Agda.css" />
</head>
<body>
<pre class="Agda">
${src}
</pre>
</body>
</html>
`,
);
} else {
copyFileSync(fullPath, fullDestPath);
}
}
const htmlname = basename(resolve(outputFile)).replace(
/(\.lagda)?\.md/,
".html",
);
const doc = readFileSync(outputFile);
// This is the post-processed markdown with HTML code blocks replacing the Agda code blocks
const tree2 = fromMarkdown(doc);
const collectedCodeBlocks: RootContent[] = [];
visit(tree2, "html", (node) => {
const html = fromHtml(node.value, { fragment: true });
const firstChild: RootContent = html.children[0]!;
visit(html, "element", (node) => {
if (node.tagName !== "a") return;
if (node.properties.href) {
// Trim off end
const [href, hash, ...rest] = node.properties.href.split("#");
if (rest.length > 0) throw new Error("come look at this");
if (href === htmlname) {
node.properties.href = `#${hash}`;
}
if (referencedFiles.has(href)) {
node.properties.href = `${base}generated/agda/${href}${hash ? `#${hash}` : ""}`;
node.properties.target = "_blank";
}
}
});
if (!firstChild?.properties?.className?.includes("Agda")) return;
const stringContents = toHtml(firstChild);
collectedCodeBlocks.push({
contents: stringContents,
});
});
let idx = 0;
try {
visit(tree, "code", (node) => {
if (!(node.lang === null || node.lang === "agda")) return;
if (idx > collectedCodeBlocks.length) {
throw new Error("failed");
}
node.type = "html";
node.value = collectedCodeBlocks[idx].contents;
idx += 1;
});
} catch (e) {
// TODO: Figure out a way to handle this correctly
// Possibly by diffing?
console.log(
"Mismatch in number of args. Perhaps there was an empty block?",
);
}
};
};
export default remarkAgda;

plugin/remark-reading-time.ts

@@ -1,12 +1,16 @@
 import getReadingTime from "reading-time";
 import { toString } from "mdast-util-to-string";
+import type { RemarkPlugin } from "@astrojs/markdown-remark";

-export function remarkReadingTime() {
-  return function (tree, { data }) {
+const remarkReadingTime: RemarkPlugin = () => {
+  return (tree, { data }) => {
     const textOnPage = toString(tree);
     const readingTime = getReadingTime(textOnPage);
     // readingTime.text will give us minutes read as a friendly string,
     // i.e. "3 min read"
     data.astro.frontmatter.minutesRead = readingTime.text;
   };
-}
+};
+
+export default remarkReadingTime;

plugin/remark-typst.ts (new file, +58)

@@ -0,0 +1,58 @@
import type { RemarkPlugin } from "@astrojs/markdown-remark";
import { mkdtempSync, readFileSync, writeFileSync } from "node:fs";
import { visit } from "unist-util-visit";
import { join, resolve, dirname } from "node:path";
import { spawnSync } from "node:child_process";
import { tmpdir } from "node:os";
import { readdir, copyFile } from "node:fs/promises";
const remarkTypst: RemarkPlugin = () => {
const tmp = mkdtempSync(join(tmpdir(), "typst"));
let ctr = 0;
return async (tree, { history }) => {
const path: string = resolve(history[history.length - 1]!);
const dir = dirname(path);
// Copy all the .typ files in the dir
for (const file of await readdir(dir)) {
console.log("checking", file);
if (file.endsWith(".typ")) {
const src = resolve(join(dir, file));
const dst = resolve(join(tmp, file));
console.log("copying", src, dst);
await copyFile(src, dst);
}
}
visit(
tree,
(node) => node.type === "code" && node.lang === "typst",
(node, index, parent) => {
const doc = resolve(join(tmp, `${ctr}.typ`));
const docOut = resolve(join(tmp, `${ctr}.svg`));
ctr += 1;
writeFileSync(doc, node.value);
const result = spawnSync("typst", [
"compile",
"--format",
"svg",
doc,
docOut,
]);
console.log("OUTPUT", result.stderr.toString());
const svgOut = readFileSync(docOut);
node.type = "html";
node.value = `
<div class="svg">
${svgOut.toString()}
</div>
`;
},
);
};
};
export default remarkTypst;

pnpm-lock.yaml (new file, +7195; file diff suppressed because it is too large)

public/.well-known/atproto-did (new file, +1)

@@ -0,0 +1 @@
did:plc:zbg2asfcpyughqspwjjgyc2d

public/robots.txt (new file, +2; path inferred)

@@ -0,0 +1,2 @@
User-agent: *
Allow: /

scripts/upload-builder.sh (new executable file, +13)

@@ -0,0 +1,13 @@
#!/usr/bin/env bash
set -euo pipefail
nix build .#docker-builder
IMAGE_NAME=$(docker load -q -i ./result | cut -d':' -f2,3 | xargs)
REMOTE_IMAGE_NAME="git.mzhang.io/michael/$IMAGE_NAME"
docker image tag "$IMAGE_NAME" "$REMOTE_IMAGE_NAME"
sed -i -E "s~(.*image: ).*blog-docker-builder:?.*~\1$REMOTE_IMAGE_NAME~" .woodpecker.yml
echo "Created $REMOTE_IMAGE_NAME"
docker push -q "$REMOTE_IMAGE_NAME"
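
This script assumes a host with Docker and flakes-enabled Nix plus push access to git.mzhang.io. A typical (assumed) workflow is to run it and then commit the image reference it rewrites in .woodpecker.yml:

./scripts/upload-builder.sh
git add .woodpecker.yml
git commit -m "Update builder"   # cf. the "Update builder" commits in the log above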

src/Prelude.agda (new file, +95)

@@ -0,0 +1,95 @@
{-# OPTIONS --cubical-compatible #-}
module Prelude where

open import Agda.Primitive

private
  variable
    l : Level

data ⊥ : Set where

rec-⊥ : {A : Set} → ⊥ → A
rec-⊥ ()

¬_ : Set → Set
¬ A = A → ⊥

data ⊤ : Set where
  tt : ⊤

data Bool : Set where
  true : Bool
  false : Bool

id : {l : Level} {A : Set l} → A → A
id x = x

module Nat where
  data ℕ : Set where
    zero : ℕ
    suc : ℕ → ℕ
  {-# BUILTIN NATURAL ℕ #-}

  infixl 6 _+_
  _+_ : ℕ → ℕ → ℕ
  zero + n = n
  suc m + n = suc (m + n)

open Nat public

infix 4 _≡_
data _≡_ {l} {A : Set l} : (a b : A) → Set l where
  instance refl : {x : A} → x ≡ x
{-# BUILTIN EQUALITY _≡_ #-}

ap : {A B : Set l} (f : A → B) {x y : A} → x ≡ y → f x ≡ f y
ap f refl = refl

sym : {A : Set l} {x y : A} → x ≡ y → y ≡ x
sym refl = refl

trans : {A : Set l} {x y z : A} → x ≡ y → y ≡ z → x ≡ z
trans refl refl = refl

infixl 10 _∙_
_∙_ = trans

transport : {l₁ l₂ : Level} {A : Set l₁} {x y : A}
  → (P : A → Set l₂)
  → (p : x ≡ y)
  → P x → P y
transport {l₁} {l₂} {A} {x} {y} P refl = id

infix 4 _≢_
_≢_ : {A : Set} → A → A → Set
x ≢ y = ¬ (x ≡ y)

module dependent-product where
  infixr 4 _,_
  infixr 2 _×_

  record Σ {l₁ l₂} (A : Set l₁) (B : A → Set l₂) : Set (l₁ ⊔ l₂) where
    constructor _,_
    field
      fst : A
      snd : B fst
  open Σ
  {-# BUILTIN SIGMA Σ #-}

  syntax Σ A (λ x → B) = Σ[ x ∈ A ] B

  _×_ : {l : Level} → (A B : Set l) → Set l
  _×_ A B = Σ A (λ _ → B)

open dependent-product public

_∘_ : {A B C : Set} → (g : B → C) → (f : A → B) → A → C
(g ∘ f) a = g (f a)

_≗_ : {A B : Set} (f g : A → B) → Set
_≗_ {A} f g = (x : A) → f x ≡ g x

postulate
  funExt : {l : Level} {A B : Set l}
    → {f g : A → B}
    → ((x : A) → f x ≡ g x)
    → f ≡ g

src/assets/self.png (name inferred): binary file changed, not shown (before: 710 KiB, after: 1.3 MiB)

src/components/Footer.astro

@@ -5,12 +5,12 @@ import "../styles/footer.scss";

 <footer>
   <p>
     Blog code licensed under <a href="https://www.gnu.org/licenses/gpl-3.0.txt" target="_blank"
-      >[GPL-3.0]</a
+      >GPL-3.0</a
     >. Post contents licensed under <a
-      href="https://creativecommons.org/licenses/by-sa/4.0/legalcode.txt">[CC BY-SA 4.0]</a
+      href="https://creativecommons.org/licenses/by-sa/4.0/legalcode.txt">CC BY-SA 4.0</a
     >.
     <br />
     Written by Michael Zhang.
-    <a href="https://git.mzhang.io/michael/blog" class="colorlink" target="_blank">[Source]</a>.
+    <a href="https://git.mzhang.io/michael/blog" class="colorlink" target="_blank">Source</a>.
   </p>
 </footer>

(file name not shown; new Astro component, +7)

@@ -0,0 +1,7 @@
---
interface Props {}
const {} = Astro.props;
console.log("SHIET", Astro.props);
console.log("SHIET", await Astro.slots.render("default"));
---

(file name not shown; side-nav component)

@@ -4,15 +4,21 @@ import { Content as ShortBio } from "../content/partials/shortBio.md";
 import links from "../data/links";
 import { Image } from "astro:assets";
 import portrait from "../assets/self.png";
+
+const target = Astro.url.pathname === "/" ? "/about/" : "/";
 ---

 <nav class="side-nav">
   <div class="side-nav-content">
-    <a href="/" class="portrait">
+    <a href={target} class="portrait" data-astro-prefetch>
       <Image src={portrait} alt="portrait" class="portrait" />
     </a>

     <div class="me">
-      <h1 class="title">Michael Zhang</h1>
+      <div class="titleContainer">
+        <h1 class="title">
+          <a href={target} data-astro-prefetch>Michael Zhang</a>
+        </h1>
+      </div>

       <div class="links">
         {
           links.map((link) => {
@@ -28,7 +34,7 @@ import portrait from "../assets/self.png";

     <div class="bio">
       <ShortBio />
-      <a href="/about">More &raquo;</a>
+      <a href="/about/">More &raquo;</a>
     </div>
   </div>
 </nav>

src/components/PostListing.astro (name inferred)

@@ -2,35 +2,75 @@
 import { getCollection, type CollectionEntry } from "astro:content";
 import Timestamp from "./Timestamp.astro";
 import { sortBy } from "lodash-es";
+import TagList from "./TagList.astro";

 interface Props {
   basePath: string;
-  includeDrafts?: boolean;
+  /** Whether or not to include drafts in this list */
+  drafts?: "exclude" | "include" | "only";
+  filteredPosts?: Post[];
+  class?: string | undefined;
+  timeFormat?: string | undefined;
+  collection?: string;
 }

 type Post = CollectionEntry<"posts">;
-const { basePath, includeDrafts } = Astro.props;
-const filter = includeDrafts ? (_: Post) => true : (post: Post) => !post.data.draft;
-const allPosts = await getCollection("posts", filter);
+const {
+  class: className,
+  basePath,
+  drafts: includeDrafts,
+  filteredPosts,
+  timeFormat,
+  collection,
+} = Astro.props;
+
+type FilterFn = (_: Post) => boolean;
+function unreachable(): never {
+  throw new Error("unreachable");
+}
+function getFilter(): FilterFn {
+  switch (includeDrafts) {
+    case "exclude":
+    case undefined:
+      return (post: Post) => !post.data.draft;
+    case "include":
+      return (_: Post) => true;
+    case "only":
+      return (post: Post) => post.data.draft === true;
+  }
+  return unreachable();
+}
+const filter = getFilter();
+let allPosts;
+if (filteredPosts) allPosts = filteredPosts.filter(filter);
+else allPosts = await getCollection(collection ?? "posts", filter);
+
 const sortedPosts = sortBy(allPosts, (post) => -post.data.date);
 ---

-<table class="postListing">
+<table class={`postListing ${className}`}>
   {
     sortedPosts.map((post) => {
       return (
+        <>
           <tr class="row">
-            <td class="info">
-              <Timestamp timestamp={post.data.date} />
-            </td>
             <td>
-              <span class="title">
-                <a href={`${basePath}/${post.slug}`} class="brand-colorlink">
+              <div class="title">
+                <a href={`${basePath}/${post.slug}/`} class="brand-colorlink">
                   {post.data.title}
                 </a>
-              </span>
+              </div>
+              <div>
+                <Timestamp timestamp={post.data.date} format={timeFormat} />
+                &middot;
+                <TagList tags={post.data.tags} />
+              </div>
             </td>
           </tr>
+        </>
       );
     })
   }
@@ -39,14 +79,32 @@ const sortedPosts = sortBy(allPosts, (post) => -post.data.date);

 <style lang="scss">
   .postListing {
     width: 100%;
-    border-spacing: 6px;
+    border-spacing: 0px 24px;
+
+    :global(.timestamp) {
+      font-family: var(--monofont);
+      color: var(--smaller-text-color);
+      font-size: 0.75em;
+    }
+
+    :global(.tags) {
+      gap: 8px;
+      display: inline-flex;
+
+      :global(.tag) {
+        background-color: inherit;
+        padding: 0;
+      }
+    }

     td {
-      // padding-bottom: 10px;
-      // line-height: 1;
+      line-height: 1;
+      display: flex;
+      flex-direction: column;
+      gap: 4px;

       .title {
-        font-size: 1.1em;
+        font-size: 12pt;
       }

       .summary {
@@ -65,6 +123,11 @@ const sortedPosts = sortBy(allPosts, (post) => -post.data.date);
       font-size: 0.75em;
       white-space: nowrap;
       text-align: right;
+      vertical-align: baseline;
+
+      .spacer {
+        font-size: 12pt;
+      }
     }
   }
 </style>

src/components/TagList.astro (new file, +18; name inferred from the import in PostListing)

@@ -0,0 +1,18 @@
---
import "../styles/tagList.scss";
const { extraFront, tags, extraBack } = Astro.props;
---
<div class="tags">
{extraFront}
{
tags.toSorted().map((tag: string) => (
<a href={`/tags/${tag}/`} class="tag">
<i class="fa fa-tag" aria-hidden="true" />
<span class="text">#{tag}</span>
</a>
))
}
{extraBack}
</div>

src/components/Timestamp.astro

@@ -1,14 +1,13 @@
 ---
 interface Props {
   timestamp: Date;
+  format?: string | undefined;
 }

-const { timestamp } = Astro.props;
-const datestamp = timestamp.toLocaleDateString(undefined, {
-  year: "numeric",
-  month: "short",
-  day: "numeric",
-});
+const { timestamp, format: customFormat } = Astro.props;
+import { format } from "date-fns";
 ---

-<span title={timestamp.toISOString()}>{datestamp}</span>
+<span class="timestamp" title={timestamp.toISOString()}>
+  {format(timestamp, customFormat ?? "yyyy MMM dd")}
+</span>

(file name not shown; table-of-contents component, new file, +128)

@@ -0,0 +1,128 @@
---
import type { MarkdownHeading } from "astro";
import "../styles/toc.scss";
interface Props {
toc: boolean;
headings: MarkdownHeading[];
}
const { toc, headings } = Astro.props;
const minDepth = Math.min(...headings.map((heading) => heading.depth));
---
{
toc ? (
<>
<div class="toc-wrapper">
<slot />
<div class="toc">
<h3 class="title">Table of contents</h3>
<ul>
{headings.map((heading) => {
const depth = heading.depth - minDepth;
const padding = 10 * Math.pow(0.85, depth);
const fontSize = 14 * Math.pow(0.9, depth);
return (
<li>
<a
href={`#${heading.slug}`}
id={`${heading.slug}-link`}
style={`padding: ${padding}px;`}
>
<span style={`padding-left: ${depth * 10}px; font-size: ${fontSize}px;`}>
{heading.text}
</span>
</a>
</li>
);
})}
</ul>
</div>
</div>
</>
) : (
<slot />
)
}
<script define:vars={{ toc, headings }}>
if (toc) {
const headingTags = new Set(["h1", "h2", "h3", "h4", "h5", "h6"]);
const headingsMap = new Map([...headings.map((heading) => [heading.slug, new Set()])]);
const reverseHeadingMap = new Map();
const linkMap = new Map();
document.addEventListener("DOMContentLoaded", function () {
const visibleElements = new Map();
// Register links
for (const heading of headings) {
const link = document.getElementById(`${heading.slug}-link`);
const el = document.getElementById(heading.slug);
if (link && el) {
linkMap.set(heading.slug, link);
link.addEventListener("click", (e) => {
el.scrollIntoView({ behavior: "smooth" });
e.preventDefault();
});
}
if (!visibleElements.has(heading.slug)) {
visibleElements.set(heading.slug, new Set());
}
}
const observer = new IntersectionObserver((entries) => {
for (const entry of entries) {
const target = entry.target;
const slug = reverseHeadingMap.get(target);
const link = linkMap.get(slug);
const associatedEls = visibleElements.get(slug);
if (entry.isIntersecting) {
// if it wasn't previously visible
// let's make the link active
if (associatedEls.size === 0) {
link.parentNode.classList.add("active");
}
associatedEls.add(target);
} else {
// if it was previously visible
// check if it's the last element
if (associatedEls.size > 0) {
if (associatedEls.size === 1) link.parentNode.classList.remove("active");
}
if (associatedEls.size > 0) {
associatedEls.delete(target);
}
}
}
});
const postContentEl = document.getElementById("post-content");
let belongsTo;
for (const child of postContentEl.children) {
if (headingTags.has(child.tagName.toLowerCase())) {
belongsTo = child.id;
}
if (belongsTo) {
const headingSet = headingsMap.get(belongsTo);
headingSet.add(child);
reverseHeadingMap.set(child, belongsTo);
}
}
[...headingsMap.values()]
.flatMap((x) => [...x])
.forEach((x) => {
observer.observe(x);
});
});
}
</script>


@ -1,5 +1,5 @@
// Place any global data in this file.
// You can import this data from anywhere in your site by using the `import` keyword.
-export const SITE_TITLE = 'Astro Blog';
-export const SITE_DESCRIPTION = 'Welcome to my website!';
+export const SITE_TITLE = "Astro Blog";
+export const SITE_DESCRIPTION = "Welcome to my website!";


@ -4,7 +4,8 @@ const posts = defineCollection({
type: "content", type: "content",
// Type-check frontmatter using a schema // Type-check frontmatter using a schema
schema: z.object({ schema: ({ image }) =>
z.object({
title: z.string(), title: z.string(),
date: z.date(), date: z.date(),
@ -20,8 +21,13 @@ const posts = defineCollection({
.transform((str) => (str ? new Date(str) : undefined)) .transform((str) => (str ? new Date(str) : undefined))
.optional(), .optional(),
tags: z.array(z.string()), heroImage: image().optional(),
heroAlt: z.string().optional(),
tags: z.array(z.string()).default([]),
draft: z.boolean().default(false), draft: z.boolean().default(false),
math: z.boolean().default(false),
toc: z.boolean().default(false),
}), }),
}); });


@ -1,8 +1,15 @@
-I'm a masters student at the University of Minnesota advised by [Favonia]. I
-previously worked as a Software Developer at [AWS] and [SIFT]. My
-computing-related interests lie in programming language design and formal
-verification, systems security, cryptography, and distributed systems.
+I'm a computer science master's student at the [University of Minnesota][1], advised by [Favonia].
+My current research is in cubical type theory and the formalization of the Serre spectral sequence.
+
+I've also worked as a researcher for [SIFT], specializing in compilers and binary analysis. Previously, I worked as a software engineer at [Swoop Search] and [AWS].
+Before that, I was a CTF hobbyist. I created [EasyCTF], a cybersecurity competition for high schoolers.
+I also briefly played with the CTF team [Project Sekai][pjsk].

+[1]: https://twin-cities.umn.edu/
+[Swoop Search]: https://swoopsrch.com/
[aws]: https://aws.amazon.com/
[sift]: https://www.sift.net/
[favonia]: https://favonia.org/
+[easyctf]: https://www.easyctf.com/
+[pjsk]: https://sekai.team/


@ -8,7 +8,9 @@ Today, many companies claim to provide "end-to-end encryption" of user data,
whether it be text messages, saved pictures, or important documents. But what
does this actually mean for your data? I'll explain what "non-end-to-end"
encryption is, why end-to-end encryption is important, and also when it might
-be absolutely meaningless.<!--more-->
+be absolutely meaningless.
+
+<!--more-->

> If you just want to read about end-to-end encryption, click [here][1].
> Otherwise, I'll start the story all the way back to how computers talk to
@ -89,7 +91,7 @@ then decrypts it offline. [Signal][signal] famously provides end-to-end
encrypted chat, so that no one, not even the government[^1], will be able to
read the messages you send if they're not the intended recipient.

-## It's still not enough {#not-enough}
+## It's still not enough

End-to-end encryption seems like it should be the end of the story, but if
there's one thing that can undermine the encryption, it's the program that's
@ -195,7 +197,7 @@ control.
[PKI][pki] infrastructure to solve this, which relies on a certificate chain
that is distributed by browser or operating system vendors.

-[1]: {{< ref "#not-enough" >}}
+[1]: #its-still-not-enough
[csam]: https://www.apple.com/child-safety/pdf/CSAM_Detection_Technical_Summary.pdf
[ferpa]: https://en.wikipedia.org/wiki/Family_Educational_Rights_and_Privacy_Act


@ -4,7 +4,7 @@ date: 2023-03-29
tags: ["docker", "linux"]
---

-First (published) blog post of the year! :raising_hands:
+First (published) blog post of the year! :raised_hands:

Here is a rather dumb way of entering a Docker Compose container that didn't
have a shell. In this specific case, I was trying to enter a Woodpecker CI
@ -12,6 +12,7 @@ container without exiting it. Some Docker containers are incredibly stripped
down to optimize away bloat (which is good!) but this may make debugging them
relatively annoying.

+> [!admonition: NOTE]
> These are my specific steps for running it, please replace the paths and
> container names with the ones relevant to your specific use-case.


@ -61,6 +61,7 @@ I shove new config files in their root directory.
 flake.nix ✗
```

+> [!admonition: NOTE]
> The `✗` indicates that I added the file to the project, and it hasn't been
> committed to the repo yet.
@ -130,9 +131,10 @@ project structure should look a bit more like this:
 flake.nix
```

-> Remember, since you moved the `.envrc` file, you will need to run `direnv
-allow` again. Depending on how you moved it, you might also need to change the
-> path you wrote in the `use flake` command.
+> [!admonition: NOTE]
+> Remember, since you moved the `.envrc` file, you will need to run `direnv allow`
+> again. Depending on how you moved it, you might also need to change the path you
+> wrote in the `use flake` command.

With this setup, the `project` directory can contain a clean clone of upstream
and your flake files will create the appropriate environment.


@ -1,5 +1,5 @@
---
-title : "Formally proving true ≢ false in cubical Agda"
+title: "Formally proving true ≢ false in Homotopy Type Theory with Agda"
slug: "proving-true-from-false"
date: 2023-04-21
tags: ["type-theory", "agda"]
@ -12,20 +12,9 @@ math : true
These are some imports that are required for code on this page to work properly.

```agda
-{-# OPTIONS --cubical #-}
-open import Cubical.Foundations.Prelude
-open import Data.Bool
-open import Data.Unit
-open import Data.Empty
-
-¬_ : Set → Set
-¬ A = A → ⊥
-
-infix 4 _≢_
-_≢_ : ∀ {A : Set} → A → A → Set
-x ≢ y = ¬ (x ≡ y)
+open import Prelude
```

</details>

The other day, I was trying to prove `true ≢ false` in Agda. I would write the
@ -39,7 +28,6 @@ For many "obvious" statements, it suffices to just write `refl` since the two
sides are trivially true via rewriting. For example:

```
-open import Data.Nat
1+2≡3 : 1 + 2 ≡ 3
1+2≡3 = refl
```
@ -60,9 +48,8 @@ left side so it becomes judgmentally equal to the right:
- suc (suc (suc zero))
- 3

-However, in cubical Agda, naively using `refl` with the inverse statement
-doesn't work. I've commented it out so the code on this page can continue to
-compile.
+However, in Agda, naively using `refl` with the inverse statement doesn't work.
+I've commented it out so the code on this page can continue to compile.

```
-- true≢false = refl
@ -95,7 +82,7 @@ The strategy here is we define some kind of "type-map". Every time we see
`false`, we'll map it to empty.

```
-bool-map : Bool → Type
+bool-map : Bool → Set
bool-map true = ⊤
bool-map false = ⊥
```
@ -105,26 +92,29 @@ over a path (the path supposedly given to us as the witness that true ≢ false)
will produce a function from the inhabited type we chose to the empty type!

```
-true≢false p = transport (λ i → bool-map (p i)) tt
+true≢false p = transport bool-map p tt
```

I used `true` here, but I could just as well have used anything else:

```
-bool-map2 : Bool → Type
+bool-map2 : Bool → Set
bool-map2 true = 1 ≡ 1
bool-map2 false = ⊥

true≢false2 : true ≢ false
-true≢false2 p = transport (λ i → bool-map2 (p i)) refl
+true≢false2 p = transport bool-map2 p refl
```

## Note on proving divergence on equivalent values

+> [!admonition: NOTE]
+> Update: some of these have been commented out since regular Agda doesn't support higher inductive types

-Let's make sure this isn't broken by trying to apply this to something that's
-actually true:
+Let's make sure this isn't broken by trying to apply this to something that's
+actually true, like this higher inductive type:

-```
+```text
data NotBool : Type where
  true1 : NotBool
  true2 : NotBool
@ -135,7 +125,7 @@ In this data type, we have a path over `true1` and `true2` that is a part of the
definition of the `NotBool` type. Since this is an intrinsic equality, we can't
map `true1` and `true2` to divergent types. Let's see what happens:

-```
+```text
notbool-map : NotBool → Type
notbool-map true1 = ⊤
notbool-map true2 = ⊥


@ -58,6 +58,7 @@ we can just give $y$ again, and use the `refl` function above for the equality
proof

```
``` ```
The next step is to prove that it's contractible. Using the same derivation for The next step is to prove that it's contractible. Using the same derivation for
@ -168,7 +169,9 @@ Bool-id-is-equiv .equiv-proof y .snd y₁ i .snd j =
```
Blocked on this issue: https://git.mzhang.io/school/cubical/issues/2
```
```
## Other Equivalences ## Other Equivalences


@ -36,7 +36,7 @@ I'm going to implement this using [Deno].
[deno]: https://deno.land/

-> **&#x1f4a1; This is a literate document.** I wrote a [small utility][3] to
+> **:bulb: This is a literate document.** I wrote a [small utility][3] to
> extract the code blocks out of markdown files, and it should produce a working
> example for this file. If you have the utility, then running the following
> should get you a copy of all the code extracted from this blog post:

Binary image file not shown (984 KiB).


@ -0,0 +1,144 @@
---
title: Thoughts on personal organization
date: 2023-08-31T13:57:29.022Z
tags:
- organization
- logseq
heroImage: ./calendarHero.png
heroAlt: pastel colored stationery background with a bunch of calendars and personal organization tools in a crayon drawing style
---
Many people don't really use a calendar of any sort to manage their lives.
I get it. Putting events into a calendar is kind of a chore. It's a menial relic
from work and none of us want to even think about creating events during our
coveted personal hours. We want to live our lives free from the constraints of
the time boxes on our screens.
On top of that, traditional calendar apps still primarily use email for the most
part (sending invites, updating times, etc.) and the new generation of calendar
apps suffer from the social network problem of having to get everyone on the
same app.
But to some extent, it's still valuable to have things down in writing rather
than juggling it in our minds all the time.
Which is why it's such a shame that the personal management story has always
been kind of fragmented. Calendars are supposed to manage the entire picture of
my personal schedule, yet they only see a small slice of my life. The only
things calendars can see automatically with no intervention on my part are
emails that are sent from airlines.
> [!admonition: NOTE]
> I'm sure Google or Apple could probably ritz up their services to scan text
> and guess events to put on your calendar, but that's missing the point. The vast
> majority of people I associate with rarely coordinate events over email in the
> first place.
## Journals
For a while I've always wanted a kind of personal information manager: something
that would put all my information in one place and make it easy for me to query
across apps. When I embarked on this search I wouldn't have thought that the
most promising tool would end up being a journaling app.
(by journaling app I mean something like [Logseq], [Obsidian], [Notion],
[Workflowy] or [the][roam] [million][joplin] [other][craft]
[similar][stdnotes] [apps][bear] that allow you to write some markdown-ish
content, store it, and then never look back at it again)
[logseq]: https://logseq.com
[obsidian]: https://obsidian.md/
[notion]: https://www.notion.so/
[workflowy]: https://workflowy.com/
[roam]: https://roamresearch.com/
[joplin]: https://joplinapp.org/
[craft]: https://www.craft.do/
[stdnotes]: https://standardnotes.com/
[bear]: https://bear.app/
The world of journaling apps is vast but relatively undiverse. Most of the apps
have the same features the others do, plus one or two gimmicks that make them a
ride-or-die pick. But there's one important feature that I have started looking out
for recently: the ability to attach arbitrary metadata to journal entries and be
able to query for them.
While new apps have been cropping up from time to time for a while now, I think
a common trend that's starting to emerge is that these "journals" are really
more like personal databases. Extracting structured fields is extremely
important if you want any kind of smart understanding of what is being
journaled.
For example, I could write "weighed in at 135 pounds today", but if I wanted to
find previous instances of this or make any kind of history, I would have to
essentially do a pure text search. However, with structured data this could be
different.
[Logseq], the app that I've settled on, is backed by a real database, and most
importantly exposes a lot of this functionality to you as a user. It allows you
to query directly on properties that you write into your daily journal or any
other page, for example like this:
![recording some property in logseq](./minicross.png)
What you're seeing is me using my daily journals to add a todo item for reading
a paper and tracking how long it takes me to do the [NY Times daily
crossword][minicross] (which I've shortened to minicross). I just add these to
my journal as it comes up throughout my day, but Logseq is able to index this
and serve it back to me in a very structured way:
[datascript]: https://github.com/tonsky/datascript
[minicross]: https://www.nytimes.com/crosswords/game/mini
![performing a query in logseq](./logseqQuery.png)
With this, I could go on to construct a graph and see historical data of how I
did over time. You can see how this could be used for more personal tracking
things like workout records or grocery trackers.
The query tool is simple and easy to learn, and makes it possible to actually
_use_ the information you wrote down, instead of just burying it into oblivion.
For example, I can write todo items inline in my journal and later find them
all in one place. Here are all of the todo items that I've tagged specifically
with the tag `#read`:
![reading list in logseq](./readingList.png)
Notice how the paper I added as a todo helpfully shows up here. No need for a
separate todo list or planning tool!
The fact that it truly is a database means I can just shove all kinds of
unrelated information into my journal, do some very trivial labeling and get
some really powerful uses out of it.
In the future I'd like to do dumps for my sleep and health data as well
and have Logseq be my ultimate source of truth. I've started developing a
[calendar plugin for Logseq][2] that will have the ability to display numerical
data using various visualizations for this purpose.
[2]: https://git.mzhang.io/michael/logseq-calendar
> [!admonition: NOTE]
> As an aside, this isn't sponsored in any way. While this post makes me sound
> like just a Logseq shill, it's actually quite the opposite: they're an
> open-source project solely funded by donations. I've been donating to them
> monthly on [Open Collective] and they've been actively developing really cool
> features!
[open collective]: https://opencollective.com/logseq
## Privacy
Because people are dumping so much of their lives into journals, it's absolutely
crucial that boundaries are clear. Without control, this would be a dream come
true for any data collection company: rather than having to go out and gather
the data, users are entering and structuring it all by themselves.
**End-to-end encryption** is a technique that ensures data is never able to be
accessed by your storage or synchronization providers. If you are in the market
for some kind of personal tracking app, make sure it talks about end-to-end
encryption as a feature. While it's [not the end-all-be-all of security][1],
it's certainly a big first step. Do careful research before deciding who to
trust with your data.
[1]: /posts/2021-10-31-e2e-encryption-useless-without-client-freedom

Binary image files not shown (62 KiB, 25 KiB, 190 KiB, 630 KiB).


@ -0,0 +1,249 @@
---
title: Building a formal CEK machine in Agda
draft: true
date: 2023-09-01T13:53:23.974Z
tags:
- computer-science
- programming-languages
- formal-verification
- lambda-calculus
heroImage: ./header.jpg
heroAlt: gears spinning wallpaper
math: true
toc: true
---
Back in 2022, I took a special topics course, CSCI 8980, on [reasoning about
programming languages using Agda][plfa], a dependently typed meta-language. For
the term project, we were to implement a simply-typed lambda calculus with
several extensions, along with proofs of certain properties.
[plfa]: https://plfa.github.io/
My lambda calculus implemented `call/cc` on top of a CEK machine.
<details>
<summary><b>Why is this interesting?</b></summary>
Reasoning about languages is one way of ensuring whole-program correctness.
Building up these languages from foundations grounded in logic helps us
achieve our goal with more rigor.
As an example, suppose I wrote a function that takes a list of numbers and
returns the maximum value. Mathematically speaking, this function would be
_non-total_; an input consisting of an empty list would not produce reasonable
output. If this were a library function I'd like to tell people who write code
that uses this function "don't give me an empty list!"
But just writing this in documentation isn't enough. What we'd really like is
for a tool (like a compiler) to tell any developer who is trying to pass an
empty list into our maximum function "You can't do that." Unfortunately, most
of the popular languages being used today have no way of describing "a list
that's not empty."
We still have a way to prevent people from running into this problem, though
it involves pushing the problem to runtime rather than compile time. The
maximum function could return an "optional" maximum. Some languages'
implementations of optional values force programmers to handle the "nothing"
case, while others ignore it silently. But in the more optimistic case, even
if the list was empty, the caller would have handled it and treated it
accordingly.
This isn't a pretty way to solve this problem. _Dependent types_ gives us
tools to solve this problem in an elegant way, by giving the type system the
ability to contain values. This also opens its own can of worms, but for
questions about program correctness, it is more valuable than depending on
catching problems at runtime.
</details>
## Crash course on the lambda calculus
The [lambda calculus] is a mathematical abstraction for computation. The core
mechanism it uses is a concept called a _term_. Everything that can be
represented in a lambda calculus is some combination of terms. A term can have
several constructors:
[lambda calculus]: https://en.wikipedia.org/wiki/Lambda_calculus
- **Var.** This is just a variable, like $x$ or $y$. By itself it holds no
meaning, but during evaluation, the evaluation _environment_ holds a mapping
from variable names to the values. If the environment says $\{ x = 5 \}$, then
evaluating $x$ would result in $5$.
- **Abstraction, or lambda ($\lambda$).** An _abstraction_ is a term that describes some
other computation. From an algebraic perspective, it can be thought of as a
function with a single argument (i.e. $f(x) = 2x$ is an abstraction, although
it would be written using the notation $\lambda x.2x$)
- **Application.** Application is sort of the opposite of abstraction, exposing
the computation that was abstracted away. From an algebraic perspective,
this is just function application (i.e. applying $f(x) = 2x$ to $3$ would
result in $2 \times 3 = 6$. Note that only a simple substitution has been done
and further evaluation is required to reduce $2\times 3$)
### Why?
The reason it's set up this way is so we can reason about terms inductively.
Rather than having lots of syntax for making it easier for programmers to write
a for loop as opposed to a while loop, or constructing different kinds of
values, the lambda calculus focuses on function abstraction and calls, and
strips everything else away.
The idea is that because terms are just nested constructors, we can describe the
behavior of any term by just defining the behavior of these 3 constructors. The
flavorful features of other programming languages can be implemented on top of
the function call rules in ways that don't disrupt the basic function of the
evaluation.
In fact, the lambda calculus is [Turing-complete][tc], so any computation can
technically be reduced to those 3 constructs. I used numbers liberally in the
examples above, but in a lambda calculus without numbers, you could define
integers using a recursive strategy called [Church numerals]. It works like this:
[church numerals]: https://en.wikipedia.org/wiki/Church_encoding
- $z$ represents zero.
- $s$ represents a "successor", or increment function. So:
- $s(z)$ represents 1,
- $s(s(z))$ represents 2
- and so on.
In lambda calculus terms, this would look like:
| number | lambda calculus expression |
| ------ | ---------------------------------- |
| 0 | $\lambda s.(\lambda z.z)$ |
| 1 | $\lambda s.(\lambda z.s(z))$ |
| 2 | $\lambda s.(\lambda z.s(s(z)))$ |
| 3 | $\lambda s.(\lambda z.s(s(s(z))))$ |
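To make the encoding concrete, here is a small sketch of Church numerals in TypeScript (my own illustration for this post; the names `Church`, `zero`, `succ`, and `toNumber` are invented for the example):

```ts
// A Church numeral takes a successor function `s` and a zero value `z`,
// and applies `s` to `z` as many times as the number it represents.
type Church<T> = (s: (x: T) => T) => (z: T) => T;

// zero applies the successor no times at all.
const zero: Church<number> = (s) => (z) => z;

// succ wraps one more application of `s` around an existing numeral.
const succ = <T>(n: Church<T>): Church<T> => (s) => (z) => s(n(s)(z));

// Recover an ordinary number by choosing "+1" as the successor and 0 as zero.
const toNumber = (n: Church<number>): number => n((x) => x + 1)(0);

console.log(toNumber(succ(succ(zero)))); // 2
```

Notice that the numeral itself is nothing but nested abstractions and applications, which is exactly the point.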
In practice, many lambda calculi incorporate higher-level constructors such as
numbers or lists to make it so we can avoid having to represent them using only
a series of function calls. However, any time we add more syntax to a language,
we increase its complexity in proofs, so for now let's keep it simple.
### The Turing completeness curse
As I noted above, the lambda calculus is [_Turing-complete_][tc]. One feature of
Turing complete systems is that they have a (provably!) unsolvable "halting"
problem. Most of the simple terms shown above terminate predictably. But as an
example of a term that doesn't halt, consider the _Y combinator_, an example of
a fixed-point combinator:
[tc]: https://en.wikipedia.org/wiki/Turing_completeness
$$
Y = \lambda f.(\lambda x.f(x(x)))(\lambda x.f(x(x)))
$$
That's quite a mouthful. If you tried calling $Y$ on some term, you would find
that evaluation quickly expands without bound. That makes sense given its
purpose: to find a _fixed point_ of whatever function you pass in.
> [!admonition: NOTE]
> As an example, the fixed-point of the function $f(x) = \sqrt{x}$ is $1$.
> That's because $f(1) = 1$, and applying $f$ to any other number sort of
> converges in on this value. If you took any number and applied $f$ infinitely
> many times on it, you would get $1$.
>
> In this sense, the Y combinator can be seen as a sort of brute-force approach
> of finding this fixed point by simply applying the function over and over until
> the result stops changing. In the untyped lambda calculus, this can be used to
> implement simple (but possibly unbounded) recursion.
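To see why evaluation blows up, it helps to unfold one application of $Y$ by hand (a sketch of the standard derivation, in the same notation as above):

$$
Y(g) = (\lambda x.g(x(x)))(\lambda x.g(x(x))) = g((\lambda x.g(x(x)))(\lambda x.g(x(x)))) = g(Y(g))
$$

Each unfolding conjures up a fresh copy of $Y(g)$ inside another call to $g$, so the term never reaches a normal form; yet the equation $Y(g) = g(Y(g))$ is precisely what makes it a fixed-point combinator.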
This actually proves disastrous for trying to reason about the logic of a
program. If something is able to recurse on itself without limit, we won't be
able to tell what its result is, and we _definitely_ won't be able to know if
the result is correct. This is why we typically ban unbounded recursion in
proof systems. In fact, you can give proofs for false statements using infinite
recursion.
This is why we actually prefer _not_ to work with Turing-complete languages when
doing logical reasoning on program evaluation. Instead, we always want to add
some constraints on it to make evaluation total, ensuring that we have perfect
information about our program's behavior.
### Simply-typed lambda calculus
The [simply-typed lambda calculus] (STLC, or the notational variant
$\lambda^\rightarrow$) adds types to every term. Types are crucial to any kind
of static program analysis. Suppose I was trying to apply the term $5$ to $6$ (in
other words, call $5$ with the argument $6$ as if $5$ was a function, like
$5(6)$). As humans we can look at that and instantly recognize that the
evaluation would be invalid, yet under the untyped lambda calculus, it would be
completely representable.
[simply-typed lambda calculus]: https://en.wikipedia.org/wiki/Simply_typed_lambda_calculus
To solve this in STLC, we make this term unrepresentable altogether. To say
you want to apply $5$ to $6$ would not be a legal STLC term. We do
this by requiring that all STLC terms are untyped lambda calculus terms
accompanied by a _type_.
This gives us more information about what's allowed before we run the
evaluation. For example, numbers may have their own type $\mathbb{N}$ (read
"nat", for "natural number") and booleans are $\mathrm{Bool}$, while functions
have a special "arrow" type $\_\rightarrow\_$, where the underscores represent
other types. A function that takes a number and returns a boolean (like isEven)
would have the type $\mathbb{N} \rightarrow \mathrm{Bool}$, while a function
that takes a boolean and returns another boolean would be $\mathrm{Bool}
\rightarrow \mathrm{Bool}$.
With this, we have a framework for rejecting terms that would otherwise be legal
in untyped lambda calculus, but would break when we tried to evaluate them. A
function application would be able to require that the argument is the same type
as what the function is expecting.
The nice property you get now is that all valid STLC programs will never get
_stuck_, which is being unable to evaluate due to some kind of error. Each term
will either be able to be evaluated to a next state, or is done.
A semi-formal definition for STLC terms would look something like this:
- **Var.** Same as before, it's a variable that can be looked up in the
environment.
- **Abstraction, or lambda ($\lambda$).** This is a function that carries three pieces
of information:
1. the name of the variable that its input will be substituted for
2. the _type_ of the input, and
3. the body in which the substitution will happen.
- **Application.** Same as before.
It doesn't really seem like changing just one term changes the language all that
much. But as a result of this tiny change, _every_ term now has a type:
- $5 :: \mathbb{N}$
- $λ(x:\mathbb{N}).2x :: \mathbb{N} \rightarrow \mathbb{N}$
- $isEven(3) :: (\mathbb{N} \rightarrow \mathrm{Bool}) · \mathbb{N} = \mathrm{Bool}$
> [!admonition: NOTE]
> Some notation:
>
> - $x :: T$ means $x$ has type $T$, and
> - $f · x$ means $f$ applied to $x$
This also means that some values are now unrepresentable:
- $isEven(λx.2x)$ wouldn't work anymore because the type of the inner argument
$λx.2x$ would be $\mathbb{N} \rightarrow \mathbb{N}$, which can't be used as an
input for $isEven$, which expects a $\mathbb{N}$.
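As a rough analogy in a mainstream language with a static checker (TypeScript here; the function names are mine, not part of STLC), the same rejection happens before anything is evaluated:

```ts
const isEven = (n: number): boolean => n % 2 === 0;
const double = (x: number): number => 2 * x;

isEven(3); // fine: 3 has the type isEven expects

// isEven(double);
// ^ rejected at compile time: '(x: number) => number'
//   is not assignable to parameter of type 'number'
```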
We have a good foundation for writing programs now, but this by itself can't
qualify as a system for computation. We need an abstract machine of sorts that
can evaluate these symbols and actually compute on them.
In practice, there's a number of different possible abstract machines that can
evaluate the lambda calculus. Besides the basic direct implementation, alternate
implementations such as [interaction nets] have become popular due to being able
to be parallelized efficiently.
[interaction nets]: https://en.wikipedia.org/wiki/Interaction_nets
## CEK machine
A CEK machine is responsible for evaluating a lambda calculus term.
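Before getting into the formal details, here is a rough TypeScript sketch of the shape of such a machine, purely as an illustration (the type and field names are invented; the actual development in this post is in Agda). A state is a triple of the **C**ontrol term currently being evaluated, an **E**nvironment mapping variables to closures, and a **K**ontinuation recording the work remaining:

```ts
// Terms of the untyped lambda calculus.
type Term =
  | { tag: "var"; name: string }
  | { tag: "lam"; param: string; body: Term }
  | { tag: "app"; fn: Term; arg: Term };

// Values are closures: a lambda paired with its defining environment.
type Value = { param: string; body: Term; env: Env };
type Env = Map<string, Value>;

// The continuation remembers what to do once the current term is a value.
type Kont =
  | { tag: "done" }
  | { tag: "evalArg"; arg: Term; env: Env; next: Kont } // function done, evaluate the argument
  | { tag: "apply"; fn: Value; next: Kont }; // argument done, apply the function

// A CEK state: <Control, Environment, Kontinuation>.
type State = { ctrl: Term; env: Env; kont: Kont };
```

Evaluation then proceeds by small steps from state to state until the continuation is `done` and the control term has become a value.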


@ -0,0 +1,12 @@
---
title: Hindley-Milner type inference
date: 2023-09-01T13:06:55.614Z
tags:
- type-theory
draft: true
---
Today, a lot of languages have a feature called something along the lines of
"type inference". The idea is that through the way variables are passed and
used, you can make guesses about what their types should be, and preemptively point
out invalid or incompatible uses. Let's talk about how it works.
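As a tiny taste of what this looks like in practice (TypeScript here, whose inference is local rather than full Hindley-Milner, but the flavor is similar):

```ts
const n = 41;                     // inferred: number
const inc = (x: number) => x + 1; // return type inferred: number
const m = inc(n);                 // inferred: number, no annotation needed

// inc("hello");
// ^ rejected: 'string' is not assignable to parameter of type 'number'
```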

Binary image file not shown (1.8 MiB).


@ -0,0 +1,28 @@
---
title: How IP routing works
date: 2023-09-01T03:50:38.386Z
tags:
- networking
draft: true
heroImage: ./cableHero.png
heroAlt: futuristic photograph of a bunch of organized network cables
---
Many of us have probably heard of an IP address, but how does it actually work?
I'm going to try to give a high-level overview of technical networking concepts.
Throughout this post I'm going to keep referring back to a train station
analogy. We'll start off with a small network and build up into something that
scales into the internet we have today.
## The simplest network
First, the analogy isn't very far off. Just as there's multiple tracks
leading away from a train station, a computer has multiple ports to communicate
with other computers, and we typically call these **interfaces**. For example, a
laptop may have a Wi-Fi _interface_ and an Ethernet _interface_, while a cell
phone may have a cellular _interface_ as well. Server computers could have any
number of interfaces.
You could imagine a simple network between 3 computers like this:


@ -0,0 +1,48 @@
---
import { Code } from "astro:components";
interface Props {
code: string;
resultName?: string | string[];
}
let { code, resultName } = Astro.props;
// Detect common whitespace
let longestCommonWhitespace: number | null = null;
for (const line of code.split("\n")) {
if (line.trim().length === 0) continue;
// match \s* instead of \s+ so unindented lines yield length 0 instead of a null match
const startingWhitespace = line.match(/^(\s*)/)!;
const len = startingWhitespace[1].length;
if (longestCommonWhitespace === null || len < longestCommonWhitespace)
longestCommonWhitespace = len;
}
code = code
.split("\n")
.map((line) => {
if (line.trim().length === 0) return "";
return line.substring(longestCommonWhitespace ?? 0);
})
.join("\n")
.trim();
// Strip some characters from it
code = code.trim();
let scriptCode = code;
if (typeof resultName === "string") scriptCode += `\n${resultName};`;
else if (Array.isArray(resultName)) scriptCode += `\n[${resultName.join(", ")}];`;
---
<!-- <Code code={code} lang="js" theme="github-dark" /> -->
<Code code={code} lang="js" theme="css-variables" />
<script define:vars={{ resultName, scriptCode }}>
const result = eval?.(scriptCode);
if (typeof resultName === "string") window[resultName] = result;
else if (Array.isArray(resultName)) {
resultName.forEach((name, i) => {
window[name] = result[i];
});
}
</script>


@ -0,0 +1,33 @@
---
import { nanoid } from "nanoid";
import "./style.scss";
interface Props {
label?: string;
id?: string;
runAction: string;
}
const { label, id, runAction } = Astro.props;
const codeId = id ?? nanoid();
const scriptCode = `
javascript:((displayResult) => {
${runAction}
})((result) => {
const el = document.getElementById("${codeId}");
el.innerText = result.toString();
const stamp = document.getElementById("${codeId}-stamp");
stamp.innerText = new Date().toISOString();
});
`;
---
<button onclick={scriptCode}>{label ?? "Run"}</button>
<div class="result">
<pre id={codeId}></pre>
<small>
Last run:
<span id={`${codeId}-stamp`}></span>
</small>
</div>


@ -0,0 +1,244 @@
---
title: Compiler from scratch
date: 2023-09-08T06:17:00.840Z
tags:
- programming-languages
draft: true
toc: true
---
import CodeBlock from "./CodeBlock.astro";
import Playground from "./Playground.astro";
Just for fun, let's write a compiler that targets WebAssembly.
I'm writing this post as I'm discovering how this works, so join me on my journey!
## Producing a working "binary"
I don't know how WebAssembly actually works, so here's some of the resources I'm
consulting:
- https://developer.mozilla.org/en-US/docs/WebAssembly/Using_the_JavaScript_API
- https://webassembly.github.io/spec/core/index.html
A compiler for a general language is quite an undertaking, so let's start with
the proverbial "Hello, world" program, just to write some output to the screen.
This ...isn't very clear either. First of all, how do we even get output from
WebAssembly?
Well, it looks like according to [this][exported-functions] document, you can
essentially mark certain wasm concepts as "exported", and access them from
`obj.instance.exports`. Let's start by trying to export a single number.
[exported-functions]: https://developer.mozilla.org/en-US/docs/WebAssembly/Exported_functions
### Returning a number from WebAssembly
We can use tables to export a number from wasm to JavaScript so we can access it
and print it to the screen. Based on the [MDN example], we can tell that we'll
need to be able to export modules, functions, and tables. We can use the [binary
format spec] to figure out how to produce this info.
[mdn example]: https://github.com/mdn/webassembly-examples/blob/5a2dd7ca5c82d19ae9dd25d170e7ef5e9f23fbb7/js-api-examples/table.wat
[binary format spec]: https://webassembly.github.io/spec/core/binary/index.html
Starting off, a class that we can start writing binary data to:
<CodeBlock
code={`
function WasmWriter(size) {
this.buffer = new ArrayBuffer(size ?? 1024);
this.cursor = 0;
}
// Helper function for displaying the number of bytes written as an array
WasmWriter.prototype.asArray = function() { return [...new Uint8Array(this.buffer.slice(0, this.cursor))]; };
WasmWriter.prototype.display = function() { return "[" + this.asArray().map(x => x.toString(16).padStart(2, '0')).join(", ") + "]"; };
`}
/>
<Playground runAction={`displayResult(new WasmWriter().display());`} />
We'll want to write some stuff into it. Like bytes?
<CodeBlock
code={`
WasmWriter.prototype.write = function(obj) {
const len = obj.len?.();
const view = new Uint8Array(this.buffer);
obj.write({
recurse: (obj) => this.write(obj),
emit: (byte) => { view[this.cursor] = byte; this.cursor += 1 }
});
};
`}
/>
Or [integers][int spec]? Let's use the [algorithm given on Wikipedia][uleb algo]
here.
[int spec]: https://webassembly.github.io/spec/core/binary/values.html#integers
[uleb algo]: https://en.wikipedia.org/wiki/LEB128#Unsigned_LEB128
<CodeBlock
resultName="UInt"
code={`
class UInt {
constructor(num) { this.num = num; }
write({ emit }) {
let num = this.num;
if (num === 0) { emit(0); return }
do {
let byte = num % 128;
num = num >> 7;
if (num !== 0) byte = byte | 128;
emit(byte);
} while(num !== 0);
};
}
`}
/>
<Playground
label="Encode some ints"
runAction={`
let ints = [10, 100, 1000, 10000, 100_000];
displayResult(ints.map(x => {
const writer = new WasmWriter();
writer.write(new UInt(x));
return \`\${x} encodes to \${writer.display()}\`;
}).join("\\n"));
`}
/>
Perfect. What do we still need to encode a complete WebAssembly program? Reading
[this][binary modules spec], I guess we'll need functions, tables, and modules.
Let's keep going, starting with [functions][func type spec].
[binary modules spec]: https://webassembly.github.io/spec/core/binary/modules.html#binary-module
[func type spec]: https://webassembly.github.io/spec/core/binary/types.html#binary-functype
<CodeBlock
resultName={["Vec", "ResultType", "NumType", "FuncType"]}
code={`
class Vec {
constructor(items) { this.items = items; }
write({ recurse }) {
recurse(new UInt(this.items.length));
this.items.forEach(item => recurse(item));
}
}
class ResultType {
constructor(valTypes) { this.valTypes = valTypes; }
write({ recurse }) { recurse(new Vec(this.valTypes)); }
}
class NumType {
constructor(type) { this.type = type; }
write({ emit }) {
emit({ "i32": 0x7f, "i64": 0x7e,
"f32": 0x7d, "f64": 0x7c }[this.type]);
};
}
class FuncType {
constructor(rt1, rt2) { this.rt1 = rt1; this.rt2 = rt2; }
write({ emit, recurse }) { emit(0x60); recurse(this.rt1); recurse(this.rt2); };
}
`}
/>
If you run this, you'll see that it prints out what we expected:
- `0x60` designates that it's a function type
- `0x00` means the list of parameter types is empty
- `0x01` means the list of return types has 1 item
- that item is `0x7f`, corresponding to `i32`
<Playground
label="Encode [] -> [i32] function"
runAction={`
const writer = new WasmWriter();
writer.write(new FuncType(
new ResultType([]),
new ResultType([new NumType("i32")]),
));
displayResult(writer.display());
`}
/>
Now, on to [tables][table type spec]:
[table type spec]: https://webassembly.github.io/spec/core/binary/types.html#table-types
<CodeBlock
resultName={["TableType", "RefType", "Limit"]}
code={`
class TableType {
constructor(et, lim) { this.et = et; this.lim = lim; }
write({ recurse }) { recurse(this.et); recurse(this.lim); }
}
class RefType {
constructor(type) { this.type = type; }
write({ emit }) { emit({"func": 0x70, "extern": 0x6f}[this.type]) }
}
class Limit {
constructor(min, max) { this.min = min; this.max = max; }
write({ emit, recurse }) {
const min = new UInt(this.min), max = new UInt(this.max);
if (this.max) { emit(0x1); recurse(min); recurse(max); }
else { emit(0x0); recurse(min); }
}
}
`}
/>
<Playground
label="Encode a table of functions with limit [1, 5]"
runAction={`
const writer = new WasmWriter();
writer.write(new TableType(
new RefType("func"),
new Limit(1, 5),
));
displayResult(writer.display());
`}
/>
#### Module
Ok, let's put this all together and make a [module][module spec]!
[module spec]: https://webassembly.github.io/spec/core/binary/modules.html#binary-module
<CodeBlock
resultName={["Module"]}
code={`
class Module {
constructor(sections) { this.sections = sections; }
write({ emit, recurse }) {
// magic header "\0asm"
emit(0x00); emit(0x61); emit(0x73); emit(0x6d);
// binary format version 1
emit(0x01); emit(0x00); emit(0x00); emit(0x00);
this.sections.map(section => recurse(section));
}
}
`}
/>
<Playground
label="Encode a module!!"
runAction={`
const writer = new WasmWriter();
writer.write(new Module([
new FuncType(new ResultType([]), new ResultType([new NumType("i32")])),
new TableType(new RefType("func"), new Limit(0)),
]));
displayResult(writer.display());
`}
/>


@ -0,0 +1,4 @@
.result {
border: 1px solid red;
padding: 6px;
}


@ -0,0 +1,122 @@
---
title: Equality
date: 2023-09-15T05:36:53.757Z
tags:
- type-theory
draft: true
---
When learning type theory, it's important to make a distinction between several
kinds of "same"-ness. Whenever you talk about two things being equal, you'd want
to qualify it with one of these in order to make it clear which you're referring
to, since they're very different concepts.
- **Definitional**, or **judgmental** equality. This is usually written with
$\equiv$ in math. If you see $x \equiv y$, this says "I've defined $x$ and $y$
to be the same, so anytime you see $x$, you can replace it with $y$ and vice
versa."
> [!admonition: NOTE]
> Technically speaking, definitional and judgmental are two separate concepts
> but coincide in many type theories. You could think of multiple levels of
> equalities like this:
>
> - Does $2$ equal $2$? Yes. This is the same object, and equality is reflexive.
> - Does $s(s(z))$ equal $2$? Well, typically this is how natural numbers are
> defined in type systems: $2 \equiv s(s(z))$. If somewhere before this, you
> said "let $2$ equal $s(s(z))$", then _by definition_ these are identical.
> - Does $1 + 1$ equal $2$? The operation $+$ is a function, and requires a
> &beta;-reduction to evaluate any further. This choice really comes down to the
> choice of the type theory designer; if you wanted a more intensional type
> theory, you could choose to not allow &beta;-reduction during evaluation of
> identities. Most type theories _do_ allow this though.
> - Does $x = y$ for any $x$ and $y$?
>
> As far as I understand, you can choose where to draw the line for what counts
> as your "definitional" equality; different type theories draw it differently.
- **Propositional** equality. This describes a _value_ that potentially holds
true or false. This is typically written $=$ in math.
## Helpful tips
- You can talk about propositional equalities conditionally, like "if $a$ equals
$b$, then $c$ is true". This kind of expression doesn't really make sense for
judgmental equality, like in the following example:
Suppose you made the judgment $x \equiv 2$. It makes no sense to then say "if $x$
equals $2$, then $c$ is true." You can skip right to "$c$ is true", because
you've defined the equality to hold; it's useless to even think about the case
where it fails, because that would be part of a completely different world
entirely.
- Here's a concrete example of the difference between judgmental and
propositional equality. Consider these 2 expressions:
- $2 + 3 = 3 + 2$
- $\forall x\ y \in \mathbb{N} . (x + y = y + x)$
In the first case, performing &beta;-reduction on both terms reduces to $5 =
5$, which is definitionally equal by virtue of being the exact same symbol.
In the second case however, &beta;-reduction is impossible. Without a specific
$x$ or $y$, there's actually no way to reduce either side more. You would need
some other constructs to prove it, such as induction.
- Suppose you were expressing these concepts in a language like TypeScript. The
analog of a judgment would be something like:
```ts
type Foo = string;
```
Any time you're using `Foo`, you really don't have to compare it with
`string`, because just through a simple substitution you will get the same
symbol:
```ts
function sayHello(who: string) { console.log(`Hello, ${who}`); }
function run(foo: Foo) { sayHello(foo); }
```
On the other hand, a propositional equality type might look something like
this instead:
```ts
class Equal<A, B> {
private constructor() {}
static refl<A>(): Equal<A, A> { return new Equal<A, A>(); }
}
```
You could have a value of this type, and query on its truthiness:
```ts
type Two = 2;
type TwoEqualsTwo = Equal<Two, 2>;

// the body would be magic; only the signature matters for this example
declare function trans<A, B, C>(x: Equal<A, B>, y: Equal<B, C>): Equal<A, C>;
```
## Notation gotcha
The syntax we use in most math papers and other literature and the syntax we use
for equalities in various dependently-typed programming languages are annoyingly
different:
| | Math | Agda | Coq | Idris |
| ------------- | ------------ | ------- | -------- | ------- |
| Judgmental | $x \equiv y$ | `x = y` | `x := y` | `x = y` |
| Propositional | $x = y$ | `x ≡ y` | `x = y` | `x = y` |
Usually there'll be some other pre-fix way of writing the propositional equals
in your language too. In Agda, it's `Path {A} x y`.
## More reading
- https://ncatlab.org/nlab/show/definitional+equality
- https://ncatlab.org/nlab/show/judgment
- https://ncatlab.org/nlab/show/judgmental+equality
- https://ncatlab.org/nlab/show/propositional+equality
- https://hott.github.io/book/hott-ebook.pdf.html


@ -0,0 +1,165 @@
---
title: DTT Project
date: 2023-10-11T04:05:23.082Z
draft: true
toc: true
tags:
- type-theory
---
References:
- https://www.cl.cam.ac.uk/~nk480/bidir.pdf
<details>
<summary>
Agda Imports
<small>for the purpose of type-checking</small>
</summary>
```agda
{-# OPTIONS --allow-unsolved-metas --allow-incomplete-matches #-}
open import Data.Nat
open import Data.Product
open import Data.String hiding (_<_)
open import Relation.Nullary using (Dec)
open import Relation.Nullary.Decidable using (True; toWitness)
```
</details>
## Damas-Hindley-Milner type inference
Unification-based algorithm for lambda calculus.
## First try
Implement terms, monotypes, and polytypes:
```agda
data Term : Set
data Type : Set
data Monotype : Set
```
Regular lambda calculus terms:
$e ::= x \mid () \mid \lambda x. e \mid e e \mid (e : A)$
```agda
data Term where
Unit : Term
Var : String → Term
Lambda : String → Term → Term
App : Term → Term → Term
Annot : Term → Type → Term
```
Polytypes are types that may include universal quantifiers ($\forall$)
$A, B, C ::= 1 \mid \alpha \mid \forall \alpha. A \mid A \rightarrow B$
```agda
data Type where
Unit : Type
Var : String → Type
Existential : String → Type
Forall : String → Type → Type
Arrow : Type → Type → Type
```
Monotypes (usually denoted $\tau$) are types that aren't universally quantified.
> [!admonition: NOTE]
> In the declarative version of this algorithm, monotypes don't have existential quantifiers either,
> but the algorithmic type system includes it.
> TODO: Explain why
```agda
data Monotype where
Unit : Monotype
Var : String → Monotype
Existential : String → Monotype
Arrow : Monotype → Monotype → Monotype
```
### Contexts
```agda
data Context : Set where
Nil : Context
Var : Context → String → Context
Annot : Context → String → Type → Context
UnsolvedExistential : Context → String → Context
SolvedExistential : Context → String → Monotype → Context
Marker : Context → String → Context
contextLength : Context →
contextLength Nil = zero
contextLength (Var Γ _) = suc (contextLength Γ)
contextLength (Annot Γ _ _) = suc (contextLength Γ)
contextLength (UnsolvedExistential Γ _) = suc (contextLength Γ)
contextLength (SolvedExistential Γ _ _) = suc (contextLength Γ)
contextLength (Marker Γ _) = suc (contextLength Γ)
-- https://plfa.github.io/DeBruijn/#abbreviating-de-bruijn-indices
postulate
lookupVar : (Γ : Context) → (n : ) → (p : n < contextLength Γ) Set
-- lookupVar (Var Γ x) n p = {! !}
-- lookupVar (Annot Γ x x₁) n p = {! !}
-- lookupVar (UnsolvedExistential Γ x) n p = {! !}
-- lookupVar (SolvedExistential Γ x x₁) n p = {! !}
-- lookupVar (Marker Γ x) n p = {! !}
data CompleteContext : Set where
Nil : CompleteContext
Var : CompleteContext → String → CompleteContext
Annot : CompleteContext → String → Type → CompleteContext
SolvedExistential : CompleteContext → String → Monotype → CompleteContext
Marker : CompleteContext → String → CompleteContext
```
### Well-Formedness
```agda
type-well-formed : (A : Type) → Set
type-well-formed Unit = {! !}
type-well-formed (Var x) = {! !}
type-well-formed (Existential x) = {! !}
type-well-formed (Forall x A) = {! !}
type-well-formed (Arrow A A₁) = {! !}
```
### Type checking
```agda
postulate
check : (Γ : Context) → (e : Term) → (A : Type) → Context
```
```agda
-- check Γ Unit A = Γ
```
```agda
-- check Γ (Var x) A = {! !}
-- check Γ (Lambda x e) A = {! !}
-- check Γ (App e e₁) A = {! !}
-- check Γ (Annot e x) A = {! !}
```
### Type synthesizing
```js
const x = () => {};
```
```agda
postulate
synthesize : (Γ : Context) → (e : Term) → (Type × Context)
-- synthesize Γ Unit = Unit , Γ
-- synthesize Γ (Var x) = {! !}
-- synthesize Γ (Lambda x e) = {! !}
-- synthesize Γ (App e e₁) = {! !}
-- synthesize Γ (Annot e x) = {! !}
```


@ -0,0 +1,147 @@
---
title: "Path induction: a GADT perspective"
slug: 2023-10-23-path-induction-gadt-perspective
date: 2023-10-23
tags: ["type-theory", "pl-theory"]
---
<details>
<summary>Imports</summary>
```
-- These types aren't actually imported to improve CI performance :(
-- It doesn't matter what they are, just that they exist
data Int : Set where
postulate
String : Set
{-# BUILTIN STRING String #-}
data _≡_ {l} {A : Set l} : (a b : A) → Set l where
instance refl : {x : A} → x ≡ x
```
</details>
> [!admonition: NOTE]
> This content is a writeup from a weekend discussion session for the fall 2023 special-topics course CSCI 8980 at the University of Minnesota taught by [Favonia], who provided the examples.
> This post is primarily a summary of the concepts discussed.
An important concept in [Martin-Löf Type Theory][mltt] is the internal equality[^1] type $\mathrm{Id}$.
Like all inductive types, it comes with the typical rules used to introduce types:
[^1]:
"Internal" here is used to mean something expressed within the type theory itself, rather than in the surrounding meta-theory, which is considered "external."
For more info, see [this][equality] page.
- Formation rule
- Term introduction rule
- Term elimination rule
- Computation rule
There's something quite peculiar about the elimination rule for $\mathrm{Id}$ in particular (commonly known as "path induction", or just $J$).
Let's take a look at its definition, in Agda syntax:
```agda
J : {A : Set}
→ (C : (x y : A) → x ≡ y → Set)
→ (c : (x : A) → C x x refl)
→ (x y : A) → (p : x ≡ y) → C x y p
J C c x .x refl = c x
```
<details>
<summary>What does this mean?</summary>
An _eliminator_ rule defines how a type is used.
It's the primitive that often powers programming language features like pattern matching.
We can break this function down into each of the parameters it takes:
- $C$ is short for "motive".
Think of $J$ as producing an $\mathrm{Id} \rightarrow C$ function, but we have to include the other components or else it's not complete.
- $c$ tells you how to handle the _only_ constructor to $\mathrm{Id}$, which is $\mathrm{refl}$.
Think of this as a kind of pattern match on the $\mathrm{refl}$ case, since $\mathrm{Id}$ is just a regular inductive type.
- $x, y, p$ these are just a part of the final $\mathrm{Id} \rightarrow C$ function.
How $J$ is computed depends on your type theory's primitives; in HoTT you would define it in terms of something like transport.
</details>
There's something odd about this: the $c$ case only defines what happens in the case of $C(x, x, \mathrm{refl})$, but the final $J$ definition extends to arbitrary $C(x, y, p)$.
How can this be the case?
A good way to think about this is using [generalized algebraic data types][gadt], or GADTs.
A GADT is similar to a normal inductive data type, but it can be indexed by a type.
This is similar to polymorphism on data types, but much more powerful.
Consider the following non-generalized data type:
```agda
data List (A : Set) : Set where
Nil : List A
Cons : A → List A → List A
```
I could write functions with this, but either [polymorphically][polymorphism] (accepts `A : Set` as a parameter, with no knowledge of what the type is) or monomorphically (as a specific `List Int` or `List Bool` or something).
What I couldn't do would be something like this:
[polymorphism]: https://wiki.haskell.org/Polymorphism
```text
sum : (A : Set) → List A → A
sum Int Nil = 0
sum Int (Cons hd tl) = hd + (sum Int tl)
sum A Nil = {! !}
sum A (Cons hd tl) = {! !}
```
Once I've chosen to go polymorphic, there's no option to know anything about the type, and I can only operate generally on it[^2].
[^2]:
As another example, if you have a polymorphic function with the type signature $\forall A . A \rightarrow A$, there's no implementation other than the $\mathrm{id}$ function, because there's no other knowledge about the type.
For more info, see [Theorems for Free][free]
With GADTs, this changes.
The key here is that different constructors of the data type can return different types of the same type family.
```
data Message : Set → Set₁ where
S : String → Message String
I : Int → Message Int
F : {T : Set} → (f : String → T) → Message T
```
Note that in the definition, I've moved the parameter from the left side to an [_index_][index] on the right of the colon.
This means I'm no longer committing to a fully polymorphic `A`, which is now allowed to be assigned anything freely.
In particular, it's able to take different values for different constructors.
[index]: https://agda.readthedocs.io/en/v2.6.4/language/data-types.html#indexed-datatypes
This allows me to write functions that are polymorphic over _all_ types, while still having the ability to refer to specific types.
```agda
extract : {A : Set} → Message A → A
extract (S s) = s
extract (I i) = i
extract (F f) = f "hello"
```
Note that the type signature of `extract` remains fully polymorphic, while each of the cases has full type information.
This is sound because we know exactly what indexes `Message` could take, and the fact that there are no other ways to construct a `Message` means we won't ever run into a case where we would be stuck on a case we don't know how to handle.
In a sense, each of the pattern match "arms" is giving more information about the polymorphic return type.
In the `S` case, it can _only_ return `Message String`, and in the `I` case, it can _only_ return `Message Int`.
We can even have a polymorphic constructor case, as seen in the `F` constructor.
The same thing applies to the $\mathrm{Id}$ type, since $\mathrm{Id}$ is pretty much just a generalized and dependent data type.
The singular constructor `refl` is only defined on the index `Id A x x`, but the type has a more general `Id A x y`.
So the eliminator only needs to handle the case of an element of $A$ equal to itself, because that's the "only" constructor for $\mathrm{Id}$ in the first place[^3].
[^3]: Not true in [homotopy type theory][hott], where the titular _univalence_ axiom creates terms in the identity type using equivalences.
Hopefully now the path induction type doesn't seem as "magical" to you anymore!
[mltt]: https://ncatlab.org/nlab/show/Martin-L%C3%B6f+dependent+type+theory
[equality]: https://ncatlab.org/nlab/show/equality#notions_of_equality_in_type_theory
[gadt]: https://en.wikipedia.org/wiki/Generalized_algebraic_data_type
[free]: https://www2.cs.sfu.ca/CourseCentral/831/burton/Notes/July14/free.pdf
[favonia]: https://favonia.org/
[hott]: https://homotopytypetheory.org/book/


@ -0,0 +1,145 @@
---
title: The Lambda Calculus
date: 2024-04-04T21:45:28.264
draft: true
toc: true
languages: ["typst"]
tags: ["typst"]
---
The lambda calculus is a formal system for modeling computation. In this
tutorial I will build up the lambda calculus from scratch.
## Expressions
Let's start with expressions. You've probably encountered these in math class.
They look something like:
- $3$
- $5 \times 5$
- $2\sqrt{12} - 5i$
Some of these probably look like they can be simplified. That's fine! We're not
worried about that yet. The computation aspect will come in a bit.
### Expression Trees
The important thing here is that all of these have some tree-like structure. For
example, look at $5 \times 5$:
```typst
#import "@preview/fletcher:0.4.3" as fletcher: *
#set page(width: auto, height: auto, margin: (x: 0.25pt, y: 0.25pt))
#set text(18pt)
#diagram(cell-size: 0.5cm, $
& times edge("dl", ->) edge("dr", ->) \
5 & & 5 \
$)
```
And that last one, $2\sqrt{12} - 5i$:
```typst
#import "@preview/fletcher:0.4.3" as fletcher: *
#import math
#set page(width: auto, height: auto, margin: (x: 0.5em, y: 0.5em))
#set text(18pt)
#diagram(cell-size: 0.5cm, {
node(pos: (0, 0), label: $-$)
node(pos: (-1, 1), label: $times$)
node(pos: (1, 1), label: $times$)
edge((0, 0), (-1, 1))
edge((0, 0), (1, 1))
node(pos: (-1.5, 2), label: $2$)
node(pos: (-0.5, 2), label: $math.sqrt(...)$)
node(pos: (-0.5, 3), label: $12$)
edge((-1, 1), (-1.5, 2))
edge((-1, 1), (-0.5, 2))
edge((-0.5, 2), (-0.5, 3))
node(pos: (0.5, 2), label: $5$)
node(pos: (1.5, 2), label: $i$)
edge((1, 1), (0.5, 2))
edge((1, 1), (1.5, 2))
})
```
These follow the order of operations (also known as PEMDAS). They tell you that
_if_ you were going to evaluate this, you would want to apply it in this
fashion.
If you look at each point in the tree, you might notice that there are several different types of nodes:
- Numbers, like $2$, $5$, or $12$ don't have any children
- The square root function, $\sqrt{...}$ has 1 child
- The subtraction ($-$) and multiplication ($\times$) functions have 2 children
The important thing for us is to define a **language** for expressions. This
way, we know exactly what kinds of expressions we can build. Let's take the
few operations in the list above and turn them into a language for writing
expressions:
$$
e ::= n \mid \sqrt{e} \mid e - e \mid e \times e
$$
We call these **constructors** of the expression. The notation is a bit dense,
but this essentially means $e$, which is shorthand for _expression_, is defined
to be one of ($\mid$):
- $n$, which is just a convention meaning "any number"
- $\sqrt{e}$, which means "square root of another expression"
- $e - e$ and $e \times e$, which correspond to subtraction and multiplication respectively
If you look closely, the number of $e$s in each option corresponds exactly to
the number of children that kind of node had in the expression tree.
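To make the tree structure concrete, here's a sketch of this grammar as a Python data type (Python is just for illustration here; the post itself builds everything in math notation):

```py
from dataclasses import dataclass

# One constructor per option of: e ::= n | sqrt(e) | e - e | e * e
@dataclass
class Num:  value: float
@dataclass
class Sqrt: child: "Expr"
@dataclass
class Sub:  left: "Expr"; right: "Expr"
@dataclass
class Mul:  left: "Expr"; right: "Expr"

Expr = Num | Sqrt | Sub | Mul

# The tree for 5 x 5 from earlier: a Mul node with two Num children.
five_times_five = Mul(Num(5), Num(5))
```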
Let's write down an expression language for the lambda calculus. In its simplest form, it looks like this:
$$
e ::= x \mid \lambda x.e \mid e\; e
$$
Here, $x$ is a convention that stands for "some variable". Without knowing what any of this means, we can already start putting together some expressions:
- $\lambda x.x$
- $\lambda s.(\lambda z.(s\; z))$
As an exercise, try drawing some of the expression trees that correspond to
these expressions. Once you're familiar enough with how expressions are built
syntactically, we can talk about evaluation.
## Evaluation
The expression language we just defined is typically considered the _statics_ of
the language. It defines how we can write down the language. What we're going to
talk about now is the _dynamics_, or how it's evaluated.
<details>
<summary>Semantics vs. Implementation</summary>
At this point it's probably a good idea to make a note about _semantics_ vs. _implementation_.
Semantics describe the outcome. If my arithmetic language defines
multiplication's _semantics_ it would require that multiplication of two numbers
to achieve another number that has some certain properties, like "repeatedly
subtracting the first number the same number of times as the second number
produces 0."
Implementation, on the other hand, needs to conform to the semantics. If I asked
you to compute $15 \times 16$ by hand, you'd probably bust out some pencil and
paper and do some long form multiplication, where you'd compute a couple of
intermediate results and add them, getting $240$.
A computer, on the other hand, notices $16 = 10000_2$, and just shifts $15 =
1111_2$ over by 4 bits, to get $11110000_2 = 240$. The ways that this same
result was computed were different, but they achieved the same final result, and
that result has the property required by the semantics of multiplication.
Just like the example above, the lambda calculus has several different
implementations of its semantics (for example, the CESK machine or the SECD
machine). These have a more complex stack structure than a straightforward
interpreter written in, say, Python. However, we can prove
that the semantics are the same.
</details>


View file

@ -0,0 +1,315 @@
---
title: My Venture into Dance Dance Revolution
date: 2024-05-02T15:25:14.268
tags: ["rhythm-games", "life"]
---
import { Image } from 'astro:assets';
I'm a huge fan of rhythm games.
Up until now, I've mainly been playing PC / mobile rhythm games such as [osu!], [Muse Dash], [Project Sekai], [Arcaea], etc.
But since the pandemic has let up, I've been able to go to arcades more frequently and found myself climbing the levels in games like [Chunithm], [Sound Voltex], [WACCA], and dance games like [Dance Dance Revolution (DDR)][DDR].
[osu!]: https://osu.ppy.sh/
[Muse Dash]: https://store.steampowered.com/app/774171/Muse_Dash/
[Project Sekai]: https://www.colorfulstage.com/
[Arcaea]: https://arcaea.lowiro.com
[Chunithm]: https://en.wikipedia.org/wiki/Chunithm
[Sound Voltex]: https://en.wikipedia.org/wiki/Sound_Voltex
[WACCA]: https://en.wikipedia.org/wiki/Wacca_(video_game)
[DDR]: https://en.wikipedia.org/wiki/Dance_Dance_Revolution
Since January of this year (2024), I have been going to the [Dave and Buster's in Maple Grove][2] pretty regularly to play DDR.
It's a great rhythm game that's been around for about as long as I've been alive, and it's also a nice workout.
I regularly hit more than 1,000 calories burned per session according to the in-game tracker.
[2]: https://www.daveandbusters.com/us/en/about/locations/maple-grove
Today I want to share some basics of how the game works, as well as go more in-depth on some tips and tricks I've learned in the last 4 months or so of playing this game more seriously.
If you're considering getting started in DDR, please give this doc a read!
The game
---
The game itself certainly doesn't need any introduction, but I'm going to give one anyway.
It's a four-button vertical scrolling rhythm game produced by Konami's rhythm game division [BEMANI].
Players step on the buttons to the music, matching arrows on the screen.
[Bemani]: https://en.wikipedia.org/wiki/Bemani
![DDR cabinet](./cab.jpg)
The game is conceptually very simple, but like all rhythm games, has an incredibly high skill ceiling.
Just go on YouTube and search for DDR tournaments and you can see how insane some of the top players are.
Higher level charts are faster, introduce more technical moves, and have more gimmicky mechanics.
Logistics
---
I primarily play at my local Dave and Busters, but I discovered that not all Dave and Busters locations have an online DDR machine.
For the Minneapolis area, the [local Discord][MNDiscord] has some pretty up-to-date info on where working cabs are located; otherwise, check the [zenius-i-vanisher] website.
[MNDiscord]: https://discord.com/invite/bAQ9S9mRZp
[zenius-i-vanisher]: https://zenius-i-vanisher.com/v5.2/arcades.php
Although you could play on any cab, I've been looking for online cabs in particular.
The online cabs let you play with an _eAmusement Pass_, which comes with a lot of features I'm going to list below.
But unless you've got access to a Round 1 near you, these suckers are hard to find.
Your best bet in that case is probably to order a card online.
![eAmusement card](./eamuse.png)
Having one of these cards means:
- Saves scores across machines (**BIG**)
- Saves configuration settings
- You can unlock songs
- You don't have to skip the tutorial each time
- More granular scroll speed changes
You can also sign up for a paid course known as the "Basic" course.
This includes more features, but costs 330 yen and has a [more involved registration process][1].
This gives you:
[1]: https://3icecream.com/tutorial/add-basic-course-guide
- Ability to see score history (**BIG**)
- Ability to add rivals and see their scores live in game
- Darker in-game background
- Fast/slow indicators in game
The ability to see score history is big, because it means you can track your scores, which in my opinion is a big part of the self-improvement aspect of continuing to play this game.
In particular, being able to see improvement over time as well as tracking overall number of clears / full combo scores can help guide your song choices for future sessions.
Also, Konami frequently performs maintenance on their servers, which means online capabilities like score saving will not be available.
Make sure to have these dates down.
Gameplay
---
The most basic elements of DDR are:
- Arrows
- Freeze
- Jumps
Arrows are basically just simple steps. If it says to step on a button, then step on it.
import arrows from './arrows.png';
<p><Image src={arrows} alt="arrows" height="240" /></p>
A freeze is the long green note. If it appears, hold it until the green track is over.
Unlike other rhythm games, holds do _not_ have release judgement.
This means it doesn't matter if you release it perfectly on time.
import freeze from './freeze.png';
<p><Image src={freeze} alt="freeze" height="240" /></p>
Jumps are when two arrows occur at the same time.
It could also be an arrow and a freeze note.
import jumps from './jumps.png';
<p><Image src={jumps} alt="jumps" height="240" /></p>
None of these concepts are too hard by themselves, but once you begin grouping them together, they can become incredibly difficult.
There's also a few charts with _shock arrows_, where if you're standing on any of the arrows, then it counts as a miss for the next couple notes.
These are rather unintuitive to play with initially, and I haven't really gotten the hang of it yet, so I assume it just gets easier with practice.
### Scroll Speed
The scroll speed of the arrows is calculated by the $\textrm{BPM} \times \textrm{Multiplier}$.
During song select, the BPM will be shown on the top near the title of the song.
The multiplier is something you set in the options menu, by pressing [9] at the song select screen.
When you're starting out, use this to determine your reading speed.
For example, if you're comfortable reading 2.0x on a 150 BPM song, that means your reading speed is around $2.0 \times 150 = 300$.
With this knowledge, when you see a 200 BPM song, you can divide $300 / 200 = 1.5$ to figure out that you need a 1.5x multiplier to read it comfortably.
Some tools like [3icecream] have a BPM calculator for charts if you enter in your reading speed, but often using your phone's calculator or just memorizing some common benchmark BPMs will do just fine.
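As a back-of-the-envelope sketch, the calculation looks like this (the 0.25 increment is my assumption for illustration; the actual available increments vary by multiplier range and game version):

```py
def pick_multiplier(reading_speed: float, bpm: float, step: float = 0.25) -> float:
    """Pick the multiplier closest to your reading speed.

    Assumes multipliers come in increments of `step`."""
    return round(reading_speed / bpm / step) * step

print(pick_multiplier(300, 150))  # 2.0
print(pick_multiplier(300, 200))  # 1.5
```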
> [!admonition: NOTE]
> Since every song you play might be a different BPM, always check to make sure the scroll speed is what you want!
If a song's BPM is variable, it will show a range instead.
**Be careful when you see this!**
This means the song may change BPM in the middle.
There's really no telling _how_ it will change before you play it, unless you look up the chart beforehand on something like [3icecream], which will tell you where all the BPM changes occur.
Generally when I see this, I treat the song as if it's whatever the higher end of that range will be.
For example, if it says 85~170, I'm going to assume it's 170 BPM.
This way, there's never a part that's too fast for me to read.
Unfortunately, for some extreme cases (usually on charts from older games), this will make the slow parts almost impossible to read.
That's just how the game works, so brush up on those slow scroll speed reading skills and hope you make it through this mess.
import mess from './mess.png';
<p><Image src={mess} alt="mess" height="240" /></p>
There's also times when the chart will completely stop for a bit and continue.
Usually this is done to emphasize something in the song.
Other times it's just to mess with you.
Unfortunately, just like the BPM changes, there's not really a good way of knowing where the stops happen ahead of time, so either watch a video of the auto playthrough ahead of time, or just pray.
An example of a song with a lot of stops is [CHAOS][6].
[6]: https://ddr.stepcharts.com/SuperNOVA/CHAOS/single-challenge
Techs
---
Every rhythm game has its own technical moves, or "techs" as I call them.
These are patterns that are generally not as intuitive for new players, but are a skill developed over time.
Once you see how these work, you start seeing them in charts and begin incorporating the techniques into your normal gameplay.
In DDR, especially in earlier stages, you typically want to avoid "double stepping", which just means stepping on two arrows with the same foot in succession.
This is because it's not very fast, and it strains the foot, making it harder to step again quickly.
So most of these patterns make it easier to alternate feet in order to move faster.
Once you incorporate these techs into your gameplay, it'll be easier to judge whether to cross your feet to keep alternating, or to just double step.
Of course, there are scenarios where double stepping is required.
I've found myself in situations where I mispredicted the pattern, and a double step would've made it easier to continue.
If you're having trouble with a section, you could always study the chart ahead of time and remember where the double steps / crossovers occur, but generally this should be avoided.
**Crossovers.**
This pattern takes the form of left-down-right or left-up-right, but you'll see it in all different shapes and forms.
Whenever you see this pattern, you want to take the foot that's not in the middle and move it across to the other side.
For example, in the excerpt from [SUNKiSS♥DROP\~jun side\~ ESP][3] below, this entire pattern can be done by alternating feet.
[3]: https://ddr.stepcharts.com/SuperNOVA2/SUNKiSS-DROP~jun-Side~/single-expert
import crossover from './crossover.png';
<p><Image src={crossover} alt="crossover" height="240" /></p>
**Scoobys.**
This pattern involves two sets of three notes going in the same direction but using different middle arrows.
If the middle arrows are both up or both down, then this pattern doesn't work.
When you see this, you enter it just like a crossover, but use your other foot to hit the 4th note.
For example, in the excerpt from [Cirno's Perfect Math Class ESP][4] below, this entire pattern can be done by alternating feet.
[4]: https://ddr.stepcharts.com/A/Cirno's-Perfect-Math-Class/single-expert
import scooby from './scooby.png';
<p><Image src={scooby} alt="scooby" height="240" /></p>
**Footswitches.**
This pattern usually shows up as repeated arrows (also known as "jacks").
However, instead of hitting them with the same foot, you can usually _switch_ off to the other foot on the same arrow.
For example, check out this excerpt from [Air Heroes ESP][5] below.
Like all the previous examples, this can be done completely by alternating.
[5]: https://ddr.stepcharts.com/2013/Air-Heroes/single-expert
import footswitch from './footswitch.png';
<p><Image src={footswitch} alt="footswitch" height="240" /></p>
There's all sorts of more advanced crossovers that build on top of these and probably even more that I haven't encountered yet.
If there's anything you think that's worth mentioning here, let me know in the comments at the bottom of this page!
Goals
---
One of the things I like about DDR is all the different ways there are to enjoy it.
At a high level, here are some I want to share here:
- Playing for high level clears
- Playing for high accuracy clears
- Playing for completion
- Playing for unlocks
- Playing courses
- Playing doubles
When I first started playing this game, all I knew was pushing skill level, and that meant trying to go for as high of a level as I could clear.
As I pushed outside of my comfort zone for clears, I would get more and more tired, but the thrill from clearing high level charts was well worth the exhaustion.
Since DDR is still at its core a rhythm game, an important aspect of stepping on arrows is how _accurate_ the timing of your steps are.
Steps are graded on a scale of
<span style="color:#C70039">MISS</span>,
<span style="color:#003399">GOOD</span>,
<span style="color:#336600">GREAT</span>,
<span style="color:#FFCC33; font-weight: bold; text-shadow:0px 0px 1px black">PERFECT</span>, and
<span style="color:#FFFFCC; font-weight: bold; text-shadow:0px 0px 1px black">MARVELOUS</span>,
with a smaller hit window for the higher judgements.
Lamps are awarded for clearing charts with no lower judgements:
- No MISS = Full Combo (blue lamp)
- No GOOD or lower = Great Full Combo (green lamp)
- No GREAT or lower = Perfect Full Combo (gold lamp)
- No PERFECT or lower = Marvelous Full Combo (white lamp)
The coveted Marvelous Full Combo, or MFC for short, means you cleared the chart with a perfect score of 1,000,000.
The best part about trying to achieve these lamps is that there is no difficulty minimum for this: you can achieve lamps on _any_ chart of _any_ difficulty.
Even though I've been playing around 16s lately, for accuracy I usually play around 9s to 11s.
The judgement windows don't change, but if the BPM is slower and there's less gimmicks, then you can usually get yourself into a rhythm and hit the notes on time.
For example, I just recently achieved my first Perfect Full Combo (PFC) on a level 9:
![Perfect full combo](./pfc.jpeg)
Taking this a step further, not only are individual songs awarded lamps for completion, but entire folders are awarded lamps for the completion of all songs within it.
For example, to achieve the yellow clear lamp for the level 14 folder, I had to clear _all_ unlocked charts that were level 14:
![14 clear lamp](./14clear.jpeg)
This means that even if you get stuck at pushing high levels or high accuracy, you can still enjoy the game by clearing difficulties you already feel comfortable with while discovering new songs.
Playing for completion really pushed me to explore the songs I didn't already know in this game.
Through playing with the premium mode, you can also unlock songs during the extra stage (assuming you collect the 9 dots required to unlock the stage).
If you play a song enough, you can unlock it permanently.
There's also a course system where you can opt to play preset selections of songs in a row without stopping.
Personally, I haven't put too much time into these yet, since I'm still so busy with the others.
There's also **doubles play**, which is a lot of fun.
This involves playing on _both_ dance pads at the same time, using all 8 arrows.
Each song is specifically charted to use both pads in doubles play.
In this mode, there's a lot of new moves that usually involve a variety of different kinds of crossovers.
This is a good way to practice using crossovers more liberally, and just to get in some more cardio.
I usually get in at least a game of doubles or two whenever I get the chance to have the machine for myself for a bit.
Tools
---
[3icecream] is an indispensable tool when it comes to DDR.
It has the following features:
[3icecream]: https://3icecream.com
- Scrapes your scores from Konami's website and displays them in listings
- Provides links to YouTube videos for charts
- Calculates relative chart difficulties based on user statistics for charts of the same level
- Exports all scores as CSV
In particular, the [difficulty list][7] is really helpful at finding "easy" charts.
If I'm going for level 17 clears, I'd usually scroll down to the bottom of the 17 list and see what songs people have generally gotten better scores on.
Even if you don't use the score tracking feature, I'd still recommend using this listing.
[7]: https://3icecream.com/difficulty_list/15
[Stepcharts] is another website that I've used to sort through and view charts when I'm not in front of a cab.
It's also where I produced all the chart images for this blog post.
[Stepcharts]: https://ddr.stepcharts.com/
[Life4] is an unofficial ranking system for personal growth.
I personally use this religiously as a way to motivate my improvement.
So far (up to gold), it covers a wide variety of skills (like high-level clears, repeated stamina sets, low-level accuracy) across a wide range of skill levels.
If you're not sure which direction to improve, consider putting yourself on this ladder and going for the next rank!
[Life4]: https://life4ddr.com/
Wrap Up
---
That's it!
I learned a lot about this game not just from playing but also reading and talking with some folks online and offline.
There's an incredible wealth of information across the internet about this game, having been around for more than two decades now.
While I'm certainly not even close to the end of my journey with DDR, I hope that you got to learn something from my experience.
If there's something I got wrong, or some resources or information you want me to add, please let me know in the comments.
**See you next time on the dance floor!**


View file

@ -0,0 +1,61 @@
---
title: Coping with refactoring
date: 2024-06-21T05:56:50.594Z
tags:
- engineering
heroImage: ./ruinsHero.png
heroAlt: ruins
---
It is the inevitable nature of code to be refactored. How do we make it a less
painful process?
A not-horrible approach to creating a piece of software is to first develop the
happy path, and then add extra code to handle other cases. When we do this,
we may find that patterns emerge and some parts may be abstracted out to make
the code cleaner to read. This makes sense.
It seems that many engineers decided that this process of abstracting is too
painful and started using other people's abstractions pre-emptively in order to
avoid having to make a lot of code changes. They may introduce patterns like the
ones described in the GoF Design Patterns book.
Some abstractions may be simple to understand, but more often than not, they
make the code longer and more complex. And sometimes, as a part of this
crystal ball future-proofing of the code, you may make a mistake :scream:. At
some point, you will have to change a lot more code than you would've had to if
you didn't start trying to make a complex design to begin with. It's the exact
same concept as the adage about [premature optimization][2].
[2]: https://en.wikipedia.org/wiki/Program_optimization
As an example, at one of my previous jobs, I was reviewing code that
created _10+ classes_ that included strategy patterns and interfaces. The code
was meant to be generic over something that could be 1 of 4 possibilities. But
the 4 possibilities would basically never change. The entire setup could've been
replaced with a single file with a 4-part if/else statement.
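As a hypothetical sketch (names invented, not the actual code from that review), the whole thing could have looked like:

```py
def handle(kind: str, payload: str) -> str:
    # The 4 possibilities, spelled out; trivial to read and trivial to change.
    if kind == "email":
        return f"email: {payload}"
    elif kind == "sms":
        return f"sms: {payload}"
    elif kind == "push":
        return f"push: {payload}"
    elif kind == "webhook":
        return f"webhook: {payload}"
    raise ValueError(f"unknown kind: {kind}")
```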
I'm not saying that design patterns aren't useful. If we had more possibilities,
or needed to make it so that programmers outside our team had to be able to
introduce their own options, then we would have to rethink the design. But
changing an if statement in a single file is trivial. Changing 10+ files and all
the places that might've accidentally referenced them is not.
Some people think they can dodge the need to refactor by just piling more
abstractions on top, in a philosophy known as ["Open to extension, closed to
modification."][1] I think this is just a different, more expensive form of
refactoring. Increasing the number of objects just increases the amount of code
you need to change in the future should requirements or assumptions change.
[1]: https://en.wikipedia.org/wiki/Open%E2%80%93closed_principle
So the next time you're thinking of introducing design patterns and creating a
boat load of files to hide your potential complexity into, consider whether the
cost of adding that abstraction is worth the pain it will take to change it
later.
> [!admonition: NOTE]
> As a bonus, if your language has a good enough type system, you probably don't
> need the strategy pattern at all. Just create a function signature and pass
> functions as values!
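For example (a sketch, not tied to any particular codebase):

```py
from typing import Callable

# A "strategy" is just a function with this signature.
Pricing = Callable[[float], float]

def regular(price: float) -> float: return price
def holiday(price: float) -> float: return price * 0.8  # 20% off

def checkout(price: float, pricing: Pricing) -> float:
    return pricing(price)

print(checkout(100.0, holiday))  # 80.0
```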


View file

@ -0,0 +1,84 @@
---
title: "Agda syntax highlighting in my blog!"
slug: 2024-06-26-agda-rendering
date: 2024-06-27T04:25:15.332Z
tags: ["meta", "agda"]
---
I finally spent some time today getting full Agda syntax highlighting working on my blog.
This took me around 3-4 hours of debugging but I'm very happy with how it turned out.
First off, a demonstration. Here's the code for function application over a path:
```
open import Agda.Primitive
open import Prelude hiding (ap)
ap : {l1 l2 : Level} {A : Set l1} {B : Set l2} {x y : A}
  → (f : A → B)
  → (p : x ≡ y)
  → f x ≡ f y
ap {l1} {l2} {A} {B} {x} {y} f refl = refl
```
While editing this in my editor, I can issue the "Agda: Load" command to have Agda type-check it.
I can also work with typed holes interactively.
The crux of the effort boiled down to two things:
1. Writing the markdown processor
2. Building the Docker image
## Markdown processor
The markdown processor is a [single file of JavaScript][plugin] that runs during my builds.
It's a [remark] plugin, so you can plug it wherever else you want, as long as it uses remark.
[plugin]: https://git.mzhang.io/michael/blog/src/commit/2b4ca03563286d21d0ba4004ac36e2e2e7a259ed/plugin/remark-agda.ts
[remark]: https://github.com/remarkjs/remark
I basically just run `agda` using the HTML backend options according to [the documentation here][1].
Using the code method, I can have it leave the rest of the markdown file intact and only replace the agda code blocks with HTML.
It'll also spit out a CSS file used to highlight.
[1]: https://agda.readthedocs.io/en/v2.6.4.3-r1/tools/generating-html.html
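Roughly, the invocation looks like this (file names are hypothetical; the real logic lives in the linked plugin file):

```py
import subprocess

# Run Agda's HTML backend; only the agda code blocks get replaced with HTML.
subprocess.run(
    ["agda", "--html", "--html-dir=out", "--html-highlight=code", "index.lagda.md"],
    check=True,
)
# out/ now contains the processed markdown, an Agda.css stylesheet, and
# highlighted pages for dependencies like Agda.Primitive.
```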
One bit of complication here is that the generated file stripped off the front matter I needed to continue rendering this page with Astro.
There's a bit of code in that plugin file where I just pull all of the agda blocks out of the generated file, in order, and paste them back into the original.
Probably not the best way but it works.
The other bit of complication here is that the generated files need to go back into the public directory of my static site generator.
In Agda, `--html-highlight=code` applies to _all_ files, including the dependency files (such as `Agda.Primitive.html`).
So I wrote a bit of glue code to have it also include the CSS file again.
Overall, not too complicated.
## Docker image
Okay, while writing this article I found [sourcedennis/agda-mini][2], but when I first started, I was having trouble finding a slim Agda distribution that was well maintained.
I looked into writing an Agda Dockerfile, but I also didn't want to maintain that either.
[2]: https://github.com/sourcedennis/docker-agda-mini
My approach was to use Nix to build the layered Docker image.
My derivation file lives [here][3].
Originally, I was considering using the version of Agda packaged in Nixpkgs.
Unfortunately, because of the GHC backend that Agda has, the Nixpkgs closure for Agda is approximately 3GB.
I didn't want a huge container image for my blog builder.
[3]: https://git.mzhang.io/michael/blog/src/commit/2b4ca03563286d21d0ba4004ac36e2e2e7a259ed/nix/docker-builder.nix
Instead, I use the Agda source directly, which is also packaged as a Nix package.
After this it's just the matter of including the rest of the party, which [pkgs.dockerTools][4] helped a lot with.
[4]: https://ryantm.github.io/nixpkgs/builders/images/dockertools/
---
That's it! Now my blog builds fully automatically in CI, with Agda syntax highlighting.
If you want to go back up to the top, click `ap` below :)
```
_ : (n : ℕ) → suc n ≡ suc n
_ = λ _ → ap suc refl
```

View file

@ -0,0 +1,416 @@
---
title: Boolean equivalences
slug: 2024-06-28-boolean-equivalences
date: 2024-06-28T21:37:04.299Z
tags: ["agda", "type-theory", "hott"]
---
This post is about a problem I recently solved that seemed simple going in, but
that I struggled with for quite a while. The problem statement is as follows:
Show that:
$$
(2 \simeq 2) \simeq 2
$$
> [!admonition: WARNING]
> :warning: This post describes exercise 2.13 of the [Homotopy Type Theory
> book][book]. If you are planning to attempt this problem yourself, spoilers
> ahead!
[book]: https://homotopytypetheory.org/book/
<small>The following line imports some of the definitions used in this post.</small>
```
open import Prelude
open Σ
```
## The problem explained
With just that notation, the problem may not be clear.
$2$ represents the set with only two elements in it, which is the booleans. The
elements of $2$ are $\mathsf{true}$ and $\mathsf{false}$.
```
data 𝟚 : Set where
  true : 𝟚
  false : 𝟚
```
In the problem statement, $\simeq$ denotes [_homotopy equivalence_][1] between
sets, which you can think of as basically a fancy isomorphism. For some
function $f : A \rightarrow B$, we say that $f$ is an equivalence[^equivalence]
if:
[^equivalence]: There are other ways to define equivalences. As we'll show, an important
property that is missed by this definition is that equivalences should be _mere
propositions_. The reason why this definition falls short of that requirement is
shown by Theorem 4.1.3 in the [HoTT book][book].
[1]: https://en.wikipedia.org/wiki/Homotopy#Homotopy_equivalence
- there exists a $g : B \rightarrow A$
- $f \circ g$ is homotopic to the identity map $\mathsf{id}_B : B \rightarrow B$
- $g \circ f$ is homotopic to the identity map $\mathsf{id}_A : A \rightarrow A$
We can write this in Agda like this:
```
record isEquiv {A B : Set} (f : A → B) : Set where
  constructor mkEquiv
  field
    g : B → A
    f∘g : (f ∘ g) ∼ id
    g∘f : (g ∘ f) ∼ id
```
Then, the expression $2 \simeq 2$ simply expresses an equivalence between the
boolean set and itself. Here's an example of a function that satisfies this equivalence:
```
bool-id : 𝟚 → 𝟚
bool-id b = b
```
We can prove that it satisfies the equivalence, by giving it an inverse function
(itself), and proving that the inverse is homotopic to identity (which is true
definitionally):
```
bool-id-eqv : isEquiv bool-id
bool-id-eqv = mkEquiv bool-id id-homotopy id-homotopy
  where
    id-homotopy : (bool-id ∘ bool-id) ∼ id
    id-homotopy _ = refl
```
We can package these together into a single definition that combines the
function along with proof of its equivalence properties using a dependent
pair[^dependent-pair]:
[^dependent-pair]: A dependent pair (or $\Sigma$-type) is like a regular pair $\langle x, y\rangle$, but where $y$ can depend on $x$.
For example, $\langle x , \mathsf{isPrime}(x) \rangle$.
In this case it's useful since we can carry the equivalence information along with the function itself.
This type is rather core to Martin-Löf Type Theory, you can read more about it [here][dependent-pair].
[dependent-pair]: https://en.wikipedia.org/wiki/Dependent_type#%CE%A3_type
```
_≃_ : (A B : Set) → Set
A ≃ B = Σ (A → B) (λ f → isEquiv f)
```
<small>Hint: (click the $\Sigma$ to see how it's defined!)</small>
This gives us an equivalence:
```
bool-eqv : 𝟚 ≃ 𝟚
bool-eqv = bool-id , bool-id-eqv
```
Great! Now what?
## The space of equivalences
Turns out, the definition I gave above is just _one_ of multiple possible
equivalences between the boolean type and itself. Here is another possible
definition:
```
bool-neg : 𝟚 → 𝟚
bool-neg true = false
bool-neg false = true
```
The remarkable thing about this is that nothing is changed by flipping true and
false. Someone using booleans would not be able to tell if you gave them a
boolean type where true and false were flipped. Simply put, the constructors
`true` and `false` are just names, and nothing can distinguish them.
We can show this by proving that `bool-neg` also forms an equivalence by using
itself as the inverse.
```
bool-neg-eqv : isEquiv bool-neg
bool-neg-eqv = mkEquiv bool-neg neg-homotopy neg-homotopy
  where
    neg-homotopy : (bool-neg ∘ bool-neg) ∼ id
    neg-homotopy true = refl
    neg-homotopy false = refl

bool-eqv2 : 𝟚 ≃ 𝟚
bool-eqv2 = bool-neg , bool-neg-eqv
```
You could imagine more complicated functions over booleans that may also exhibit
this equivalence property. So how many different elements of the type $2 \simeq
2$ exist?
## Proving $2 \simeq 2$ only has 2 inhabitants
It turns out there are only 2 equivalences, the 2 that we already found. But how
do we know this for a fact? Well, if there are only 2 equivalences, that means
there are only 2 inhabitants of the type $2 \simeq 2$. We can prove that this
type only has 2 inhabitants by establishing an equivalence between this type and
another type that is known to have 2 inhabitants, such as the boolean type!
That's where the main exercise statement comes in: "show that" $(2 \simeq 2)
\simeq 2$, or in other words, find an inhabitant of this type.
```
main-theorem : (𝟚 ≃ 𝟚) ≃ 𝟚
```
How do we do this? Well, remember what it means to define an equivalence: first
define a function, then show that it is an equivalence. Since equivalences are
symmetrical, it doesn't matter which way we start first. I'll choose to first
define a function $2 \rightarrow 2 \simeq 2$. This is fairly easy to do by
case-splitting:
```
f : 𝟚 → (𝟚 ≃ 𝟚)
f true = bool-eqv
f false = bool-eqv2
```
This maps `true` to the equivalence derived from the identity function, and
`false` to the function derived from the negation function.
Now we need another function in the other direction. We can't case-split on
functions, but we can certainly case-split on their output. Specifically, we can
differentiate `id` from `neg` by their behavior when being called on `true`:
- $\mathsf{id}(\mathsf{true}) :\equiv \mathsf{true}$
- $\mathsf{neg}(\mathsf{true}) :\equiv \mathsf{false}$
```
g : (𝟚 ≃ 𝟚) → 𝟚
g eqv = (fst eqv) true
-- same as this:
-- let (f , f-is-equiv) = eqv in f true
```
Hold on. If you know about the cardinality of functions, you'll observe that in
the space of functions $f : 2 \rightarrow 2$, there are $2^2 = 4$ functions up
to pointwise equality. So if we mapped 4 functions down to 2 values, how could
this be considered an equivalence?
A **key observation** here is that we're not mapping from all possible functions
$f : 2 \rightarrow 2$. We're mapping functions $f : 2 \simeq 2$ that have
already been proven to be equivalences. This means we can count on them to be
structure-preserving, and can rule out cases like the function that maps
everything to true, since it can't possibly have an inverse.
We'll come back to this later.
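As a quick sanity check outside of Agda, we can brute-force this counting argument (a Python sketch, not part of the proof):

```py
from itertools import product

# All four functions 𝟚 → 𝟚, written as truth tables (f(true), f(false)).
tables = list(product([True, False], repeat=2))

def is_bijection(table):
    # On a finite type, an equivalence is exactly a bijection.
    return set(table) == {True, False}

print([t for t in tables if is_bijection(t)])
# [(True, False), (False, True)] -- exactly id and neg survive
```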
First, let's show that $g \circ f \sim \mathsf{id}$. This one is easy, we can
just case-split. Each of the cases reduces to something that is definitionally
equal, so we can use `refl`.
```
g∘f : (g ∘ f) ∼ id
g∘f true = refl
g∘f false = refl
```
Now comes the complicated case: proving $f \circ g \sim \mathsf{id}$.
> [!admonition: NOTE]
> Since Agda's comment syntax is `--`, the horizontal lines in the code below
> are simply a visual way of separating out our proof premises from our proof
> goals.
```
module f∘g-case where
  goal :
    (eqv : 𝟚 ≃ 𝟚)
    --------------------
    → f (g eqv) ≡ id eqv

  f∘g : (f ∘ g) ∼ id
  f∘g eqv = goal eqv
```
Now our goal is to show that for any equivalence $\mathsf{eqv} : 2 \simeq 2$,
applying $f ∘ g$ to it is the same as not doing anything. We can evaluate the
$g(\mathsf{eqv})$ a little bit to give us a more detailed goal:
```
  goal2 :
    (eqv : 𝟚 ≃ 𝟚)
    --------------------------
    → f ((fst eqv) true) ≡ eqv

  -- Solving goal with goal2, leaving us to prove goal2
  goal eqv = goal2 eqv
```
The problem is that if you think about equivalences as encoding some rich data
about a function, converting it to a boolean and back is sort of like shoving it
into a lower-resolution domain and then bringing it back out. How can we prove
that the equivalence is still the same equivalence, and as a result prove that
there are only 2 possible equivalences?
Note that the function $f$ ultimately still only produces 2 values. That means
if we want to prove this, we can case-split on $f$'s output. In Agda, this uses
a syntax known as [with-abstraction]:
[with-abstraction]: https://agda.readthedocs.io/en/v2.6.4.3-r1/language/with-abstraction.html
```
  goal3 :
    (eqv : 𝟚 ≃ 𝟚) → (b : 𝟚) → (p : fst eqv true ≡ b)
    ------------------------------------------------
    → f b ≡ eqv

  -- Solving goal2 with goal3, leaving us to prove goal3
  goal2 eqv with b ← (fst eqv) true in p = goal3 eqv b p
```
We can now case-split on $b$, which is the output of calling $g$ on the
equivalence. This means that for the `true` case, we need to
show that $f(b) = \mathsf{booleqv}$ (which is based on `id`) is equal to
the equivalence that generated the `true`.
Let's start with the `id` case; we just need to show that for every equivalence
$e$ where running the equivalence function on `true` also returned `true`, $e
\equiv f(\mathsf{true})$.
Unfortunately, we don't know if this is true unless our equivalences are _mere
propositions_, meaning if two functions are identical, then so are their
equivalences.
$$
\mathsf{isProp}(P) :\equiv \prod_{x, y \, : \, P}(x \equiv y)
$$
<small>Definition 3.3.1 from the [HoTT book][book]</small>
Applying this to $\mathsf{isEquiv}(f)$, we get the property:
$$
\sum_{f : A \rightarrow B} \left( \prod_{e_1, e_2 \, : \, \mathsf{isEquiv}(f)} e_1 \equiv e_2 \right)
$$
This proof is shown later in the book, so I will use it here directly without proof[^equiv-isProp]:
[^equiv-isProp]: This is shown in Theorem 4.2.13 in the [HoTT book][book].
I might write a separate post about that when I get there, it seems like an interesting proof as well!
```
  postulate
    equiv-isProp : {A B : Set}
      → (e1 e2 : A ≃ B) → (Σ.fst e1 ≡ Σ.fst e2)
      -----------------------------------------
      → e1 ≡ e2
```
Now we're going to need our **key observation** that we made earlier, that
equivalences must not map both values to a single one.
This way, we can pin the behavior of the function on all inputs by just using
its behavior on `true`, since its output on `false` must be _different_.
We can use a proof of [$\mathsf{true} \not\equiv \mathsf{false}$][true-not-false] that I've shown previously.
[true-not-false]: https://mzhang.io/posts/proving-true-from-false/
```
  true≢false : true ≢ false
  -- read: true ≡ false → ⊥
  true≢false p = bottom-generator p
    where
      map : 𝟚 → Set
      map true = ⊤
      map false = ⊥

      bottom-generator : true ≡ false → ⊥
      bottom-generator p = transport map p tt
```
Let's transform this into information about $f$'s output on different inputs:
```
  observation : (f : 𝟚 → 𝟚) → isEquiv f → f true ≢ f false
  -- read: f true ≡ f false → ⊥
  observation f (mkEquiv g f∘g g∘f) p =
    let
      -- Given p : f true ≡ f false
      -- Proof strategy is to call g on it to get (g ∘ f) true ≡ (g ∘ f) false
      -- Then, use our equivalence properties to reduce it to true ≡ false
      -- Then, apply the lemma true≢false we just proved
      step1 = ap g p
      step2 = sym (g∘f true) ∙ step1 ∙ (g∘f false)
      step3 = true≢false step2
    in step3
```
For convenience, let's rewrite this so that it takes in an arbitrary $b$ and
returns the version of the inequality we want:
```
  observation' : (f : 𝟚 → 𝟚) → isEquiv f → (b : 𝟚) → f b ≢ f (bool-neg b)
  --                                                  ^^^^^^^^^^^^^^^^^^^^
  observation' f eqv true p = observation f eqv p
  observation' f eqv false p = observation f eqv (sym p)
```
Then, solving the `true` case is simply a matter of using function
extensionality (functions are equal if they are point-wise equal) to show that
just the function part of the equivalences are identical, and then using this
property to prove that the equivalences must be identical as well.
```
  goal3 eqv true p = equiv-isProp bool-eqv eqv functions-equal
    where
      e = fst eqv

      pointwise-equal : (x : 𝟚) → e x ≡ x
      pointwise-equal true = p
      pointwise-equal false with ef ← e false in eq = helper ef eq
        where
          helper : (ef : 𝟚) → e false ≡ ef → ef ≡ false
          helper true eq = rec-⊥ (observation' e (snd eqv) false (eq ∙ sym p))
          helper false eq = refl

      -- NOTE: fst bool-eqv = id, definitionally
      functions-equal : id ≡ e
      functions-equal = sym (funExt pointwise-equal)
```
The `false` case is proved similarly.
```
  goal3 eqv false p = equiv-isProp bool-eqv2 eqv functions-equal
    where
      e = fst eqv

      pointwise-equal : (x : 𝟚) → e x ≡ bool-neg x
      pointwise-equal true = p
      pointwise-equal false with ef ← e false in eq = helper ef eq
        where
          helper : (ef : 𝟚) → e false ≡ ef → ef ≡ true
          helper true eq = refl
          helper false eq = rec-⊥ (observation' e (snd eqv) true (p ∙ sym eq))

      functions-equal : bool-neg ≡ e
      functions-equal = sym (funExt pointwise-equal)
```
Putting this all together, we get the property we want to prove:
```
open f∘g-case
-- main-theorem : (𝟚 ≃ 𝟚) ≃ 𝟚
main-theorem = g , mkEquiv f g∘f f∘g
```
Now that Agda's all happy, our work here is done!
Going through all this taught me a lot about how the basics of equivalences work
and how to express a lot of different ideas into the type system. Thanks for
reading!

View file

@ -0,0 +1,152 @@
---
title: The lambda calculus
date: 2024-09-10T10:16:12.486Z
type: default
draft: true
tags: [pl-theory]
---
If you want to analyze something, one useful approach is to try to start with
some smaller, minimal version of it, making observations about it, and then
generalizing those observations.
One particularly useful tool for analyzing and developing programming languages
is the **lambda calculus**, an amazingly simple abstract machine that can
represent all [Turing-complete] computation.
Due to its simplicity, there are numerous ways to extend it. In fact, the
standard way for researchers to introduce new language features is to extend the
lambda calculus with their feature, and prove properties about it.
Not only is it useful as a reasoning tool, some compilers may use a variant of
lambda calculus as an intermediate language for compilation. GHC famously uses a
variant of System F as its Core language.
[Turing-complete]: https://en.wikipedia.org/wiki/Turing_completeness
In this blog post series, I'll introduce the lambda calculus in its most basic
form, introduce some conventions when it comes to defining languages like this,
writing an interpreter for it, and then writing some extensions.
## Abstract machines
Firstly, what is an abstract machine? We think of real machines as mechanical
beings that behave according to some rules. If I punch in 1 + 1 into my
calculator, and hit enter, I expect it to return 2.
In reality, there's all sorts of reasons that calculation could possibly fail:
battery too low, the chip overheating, [cosmic rays], just to name a few.
From a mathematical point of view, these are all uninteresting failure cases we
would rather just ignore.
[cosmic rays]: https://en.wikipedia.org/wiki/Single-event_upset
What we could do is distill _just_ the rules into some form we can reason about.
Then we could simulate its execution in our heads and confirm that the rules we
created behave the way we intend.
## The lambda calculus
The lambda calculus is essentially an abstract machine where the only defined
notion is the function. The only operations you need are:
- Make a function (which we call a "lambda", or $\lambda$)
- Call a function
That's it. That's all there is to it.
Here's the identity function:
$$
\lambda x . x
$$
Here's a function that calls its first input on its second:
$$
\lambda f . (\lambda x . f (x))
$$
Taking in multiple arguments can be represented by just nesting lambdas.
## Representing numbers
With a language this simple, how can you possibly do anything with it? Can you
even do arithmetic?
It turns out the answer is yes, as long as you _encode_ numbers in a particular
way. One clever way is by using [Church encoding]. Here are some of the first
few natural numbers written with Church encoding:
[Church encoding]: https://en.wikipedia.org/wiki/Church_encoding
- 0 is $\lambda s . \lambda z . z$
- 1 is $\lambda s . \lambda z . s(z)$
- 2 is $\lambda s . \lambda z . s(s(z))$
- 3 is $\lambda s . \lambda z . s(s(s(z)))$
See the pattern? $s$ represents a "successor" function, and $z$ represents zero.
Note that what we pass in for $s$ and $z$ don't matter, as long as $s$ is a
function.
With this, we can imagine a "plus" operator that simply takes 2 numbers $a$ and
$b$, then calls $a$ (since it's a function) using $b$ as the "zero" value. Here it is:
$$
\mathsf{plus} :\equiv \lambda a . \lambda b . \lambda s . \lambda z . a(s)(b(s)(z))
$$
If this doesn't make sense, try plugging 2 and 3 from above and simplifying a
bit to convince yourself that this function works.
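If it helps, here's the same encoding as a quick Python sketch (Python lambdas standing in for lambda calculus terms; this is just for illustration):

```py
# Church numerals as nested lambdas.
zero = lambda s: lambda z: z
succ = lambda n: lambda s: lambda z: s(n(s)(z))
plus = lambda a: lambda b: lambda s: lambda z: a(s)(b(s)(z))

# Decode a numeral by counting how many times s gets applied.
to_int = lambda n: n(lambda x: x + 1)(0)

two = succ(succ(zero))
three = succ(two)
print(to_int(plus(two)(three)))  # 5
```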
There's also ways to encode different things like rational numbers, booleans,
pairs, and lists. See the [Church encoding] Wikipedia page for more.
## Formal definition
Now that we have the intuition down, let's go on to a more formal definition.
The lambda calculus is a _language_. In this language we have terms that can get
evaluated. Terms are commonly described with a [grammar], which tell us how
terms in this language can be constructed. The grammar for the lambda calculus
is incredibly simple:
[grammar]: https://en.wikipedia.org/wiki/Formal_grammar
$$
\mathsf{Term} \; t ::= x \; | \; \lambda x . t \; | \; t(t)
$$
This says that there are only 3 possible terms:
1. $x$, which is our convention for some kind of variable or name
2. $\lambda x . t$, which creates a lambda function that takes an input called $x$ and returns another term $t$
3. $t(t)$ which is just function application, one term calling another
All of the terms above, including $\mathsf{plus}$, can be represented as a
combination of these constructors. Try it for yourself!
Great, but we can't really do anything if all we know is how to write down some
terms. We need some rules to tell us how to evaluate it. Something like this:
$$
(\lambda x . t_1) (t_2) \longmapsto \mathsf{subst}_{x \mapsto t_2} t_1
$$
Slightly inventing notation here, but essentially whenever we see a lambda being
applied to something, we should substitute the thing applied into the body of
the lambda everywhere we see that $x$. This isn't really a formal definition,
just kind of a vague description. There are too many loose ends: what is
substitution? How do we keep track of what variables have which values? Where
does the result go? Let's go back and introduce a few more concepts to clean up
the formal definition.
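To make the rule above a bit more concrete first, here's a naive Python sketch of substitution and single-step reduction; it deliberately ignores variable capture, which is exactly one of the loose ends the sections below need to address:

```py
from dataclasses import dataclass

# Term t ::= x | λx.t | t(t), one class per constructor.
@dataclass
class Var: name: str
@dataclass
class Lam: param: str; body: "Term"
@dataclass
class App: fn: "Term"; arg: "Term"
Term = Var | Lam | App

def subst(t: Term, x: str, v: Term) -> Term:
    match t:
        case Var(name): return v if name == x else t
        case Lam(param, body):
            # Stop if the binder shadows x; otherwise recurse (capture ignored!).
            return t if param == x else Lam(param, subst(body, x, v))
        case App(fn, arg): return App(subst(fn, x, v), subst(arg, x, v))

def step(t: Term) -> Term | None:
    # The single rule: (λx. t1)(t2) steps to subst(t1, x, t2)
    match t:
        case App(Lam(x, body), arg): return subst(body, x, arg)
    return None

print(step(App(Lam("x", Var("x")), Var("y"))))  # Var(name='y')
```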
### Contexts
### Substitution
### Values
### Judgments
### Rules

View file

@ -0,0 +1,64 @@
---
title: What's cool about Pijul?
date: 2024-09-10T22:37:03.684Z
tags: [engineering]
draft: true
---
Version control is one of those things that most of us hardly ever think about
until it causes problems. Lots of projects claim to be able to cover up the
internals and expose a glossy interface, but once you really start using them
you notice all the pitfalls.
In particular, there seems to be this never-ending battle between the merge camp
and the rebase camp. Every project I join seems to have different opinions about
the process.
First of all, both have fundamental weaknesses. On the **merge** side, merge
commits lose information about the merge. Essentially, a merge is two separate pieces of information:
- The commits that came before it
- The new final snapshot of the code
You can't commute changes made during a merge with a different change, because the merge is simply a new state with no origin information. Also, if some conflicts are missed during the resolution of the merge, they just slip through.
On the other hand, the problem with **rebase** is that it fundamentally changes
history, which is a _huge_ problem. On your own feature branches, this might be
ok, but if you are actively working with someone and need to rebase against
main, you're essentially changing their work.
---
My understanding is that the fundamental differences between [Pijul] (and Darcs, which it's based on) and git are:
[Pijul]: https://pijul.org/
1. Conflicts are a known state
1. The patches are not ordered into a merkle tree
At least these are the differences I care about. You'll note that the first one fixes the merge issue and the second one fixes the rebase issue.
With conflicts being recorded as a known state in the tree, we can add another commit on top that "fixes" the conflict. Since conflicts are deterministic, and each file's conflicts are recorded, you never have the problem of glossing over conflicts by accident.
Also, since patches don't depend on "previous" patches' hashes, changing the
order in which a patch lies in history doesn't require modifying the patch
itself, unlike in Git, where moving a commit back or forth modifies the entire
future of the tree. Instead, dependencies are just tracked as metadata alongside the contents of the patch itself.
---
With all this said, what's _not_ cool about Pijul? Despite being close to a 1.0,
the Pijul software itself is still heavily lacking in documentation. For
example, the on-disk format and the push protocol are not specified in the
documentation. Also, the only real available forge is the proprietary Nest,
operated by the Pijul developers themselves. Nest was also incredibly
unintuitive and hard to use.
It's quite a shame because the fundamental structure of the underlying data does
impact the surface level operations of the tool quite a bit. Git has a lot of
clunky corners as we've discovered over the years, but it's so hard to work
around it because of the massive engineering effort involved.
At some point I'd love to pick apart the tool and see how it works and try to
write up some tooling, but I'm still sinking from SIFT work and PhD applications
right now, so I'll just stick to the sidelines.


View file

@ -0,0 +1,176 @@
---
title: The circle is a suspension over booleans
slug: 2024-09-15-circle-is-a-suspension-over-booleans
date: 2024-09-15T23:02:32.058Z
tags: [algebraic-topology, hott]
draft: true
---
<details>
<summary>Imports</summary>
```
{-# OPTIONS --cubical --allow-unsolved-metas #-}
module 2024-09-15-circle-is-a-suspension-over-booleans.index where
open import Cubical.Core.Primitives
open import Cubical.Foundations.Prelude
open import Cubical.Foundations.GroupoidLaws
open import Cubical.Foundations.Isomorphism
open import Cubical.HITs.S1.Base
open import Data.Bool.Base hiding (_∧_; _∨_)
open import Data.Nat.Base
```
</details>
One of the simpler yet still very interesting spaces in algebraic topology is the **circle**.
Analytically, a circle can be described as the set of all points that satisfy:
$$
x^2 + y^2 = 1
$$
Well, there are a lot more circles, like $x^2 + y^2 = 2$, and so on, but in topology land we don't really care.
All of them are equivalent up to continuous deformation.
What is interesting, though, is the fact that it can _not_ be deformed down to a single point at the origin.
It has the giant hole in the middle.
The circle, usually denoted $S^1$, is a special case of $n$-spheres.
For some dimension $n \in \mathbb{N}$, the $n$-sphere can be defined analytically as:
$$
\lVert \bm{x} \rVert_2 = 1
$$
where $\lVert \bm{x} \rVert_2$ is the [Euclidean norm][1] of a point $\bm{x} \in \mathbb{R}^{n+1}$.
[1]: https://en.wikipedia.org/wiki/Norm_(mathematics)
However, in the synthetic world, circles look a lot different.
We ditch the coordinate system and boil the circle down to its core components in terms of points and paths.
The 1-sphere $S^1$ is defined with:
$$
\begin{align*}
\mathsf{base} &: S^1 \\
\mathsf{loop} &: \mathsf{base} \equiv \mathsf{base}
\end{align*}
$$
What about the 2-sphere, aka what we normally think of as a sphere?
We can technically define it as a 2-path over the base point:
$$
\begin{align*}
\mathsf{base} &: S^2 \\
\mathsf{surf} &: \mathsf{refl}_\mathsf{base} \equiv_{\mathsf{base} \equiv \mathsf{base}} \mathsf{refl}_\mathsf{base}
\end{align*}
$$
It would be nice to have an iterative definition of spheres; one that doesn't rely on us using our intuition to form new ones.
Ideally, it would be a function $S^n : \mathbb{N} \rightarrow \mathcal{U}$, where we could plug in an $n$ of our choosing.
For an iterative definition, we'd like some kind of base case.
What's the base case of spheres?
What is a $0$-sphere?
If we take our original analytic definition of spheres and plug in $0$, we find out that this is just $| x | = 1$, which has two solutions: $-1$ and $1$.
The space of solutions is just a space with two elements!
In other words, the type of booleans is actually the $0$-sphere, $S^0$.
![](./1.png)
How about the iterative case? How can we take an $n$-sphere and get an $(n+1)$-sphere?
This is where we need something called a _suspension_.
Suspensions are a form of higher inductive type (HIT). It can be defined like this:
```
data Susp (A : Type) : Type where
  north : Susp A
  south : Susp A
  merid : A → north ≡ south
```
Intuitively, you take the entirety of the type $A$, and imagine it as some kind of plane.
Then, add two points, $\mathsf{north}$ and $\mathsf{south}$, to the top and bottom respectively.
Now, each original point in $A$ defines a _path_ between $\mathsf{north}$ and $\mathsf{south}$.
![](./susp.png)
It may not be too obvious, but the circle $S^1$ can be represented as a suspension of the booleans $S^0$.
This is presented as Theorem 6.5.1 in the [HoTT book]. I'll go over it briefly here.
[hott book]: https://homotopytypetheory.org/book/
The main idea is to consider the paths generated by both the $\mathsf{true}$ and the $\mathsf{false}$ points to add up to a single $\mathsf{loop}$.
Here's a visual guide:
![](./boolsusp.png)
We can write functions going back and forth:
```
Σ2→S¹ : Susp Bool → S¹
Σ2→S¹ north = base -- map the north point to the base point
Σ2→S¹ south = base -- also map the south point to the base point
Σ2→S¹ (merid false i) = loop i -- for the path going through false, let's map this to the loop
Σ2→S¹ (merid true i) = base -- for the path going through true, let's map this to refl
S¹→Σ2 : S¹ → Susp Bool
S¹→Σ2 base = north
S¹→Σ2 (loop i) = (merid false ∙ sym (merid true)) i
```
Now, to finish showing the equivalence, we need to prove that these functions concatenate into the identity in both directions:
```
rightInv : section Σ2→S¹ S¹→Σ2
rightInv base = refl
rightInv (loop i) =
  -- Trying to prove that Σ2→S¹ (S¹→Σ2 loop) ≡ loop
  -- Σ2→S¹ (merid false ∙ sym (merid true)) ≡ loop
  -- Σ2→S¹ (merid false) = loop
  -- Σ2→S¹ (sym (merid true)) = refl_base
  cong (λ p → p i) (
    cong Σ2→S¹ (merid false ∙ sym (merid true)) ≡⟨ congFunct {x = merid false {! !}} Σ2→S¹ refl refl ⟩
    loop ∙ refl ≡⟨ sym (rUnit loop) ⟩
    loop ∎
  )
leftInv : retract Σ2→S¹ S¹→Σ2
leftInv north = refl
leftInv south = merid true
leftInv (merid true i) j =
  -- i0 = north, i1 = merid true j (j = i0 => north, j = i1 => south)
  merid true (i ∧ j)
leftInv (merid false i) j =
  -- j = i0 ⊢ (merid false ∙ (λ i₁ → merid true (~ i₁))) i
  -- j = i1 ⊢ merid false i
  -- i = i0 ⊢ north
  -- i = i1 ⊢ merid true j
  -- (i, j) = (i0, i0) = north
  -- (i, j) = (i0, i1) = north
  -- (i, j) = (i1, i0) = merid true i0 = north
  -- (i, j) = (i1, i1) = merid true i1 = south
  let f = λ k → λ where
        (i = i0) → north
        (i = i1) → merid true (j ∨ ~ k)
        -- j = i0 ⊢ (merid false ∙ (λ i₁ → merid true (~ i₁))) i
        (j = i0) → compPath-filler (merid false) (sym (merid true)) k i
        (j = i1) → merid false i
  in hcomp f (merid false i)
```
And this gives us our equivalence!
```
-- Σ2≃S¹ : Susp Bool ≃ S¹
-- Σ2≃S¹ = isoToEquiv (iso Σ2→S¹ S¹→Σ2 rightInv leftInv)
```
In fact, we can also show that the $2$-sphere is the suspension of the $1$-sphere.
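Putting it together, this gives the iterative definition we wanted at the start (this is the standard definition; compare `S₊` in the Cubical Agda library):

$$
S^0 :\equiv 2, \qquad S^{n+1} :\equiv \mathsf{Susp}(S^n)
$$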


View file

@ -0,0 +1,101 @@
---
title: CTF FAQ
date: 2024-09-18T00:49:55-05:00
tags: [ctf]
draft: true
---
There's a lot of foundational stuff that is sometimes not obvious to people who are just starting out in CTF.
### What does nc \<ip\> mean?
For example,
```
nc chal.examplectf.com 12345
```
This is a quick way of saying that a server is running at a specific IP on a specific port.
`nc` stands for netcat, which is a tool that lets you talk to a server through your command line.
> [!admonition: NOTE]
> This also means that the flag is hidden on the server, and will **not** appear in any of the files that have been distributed to you.
In reality, you'd probably only use `nc` to play around with the challenge initially.
You'll want to move to a script soon, since you're usually given a copy of the program to experiment with locally.
With vanilla Python, you can open a [socket] and replicate the `nc` behavior with:
[socket]: https://docs.python.org/3/library/socket.html
```py
import socket

# open a TCP connection, just like `nc chal.examplectf.com 12345`
s = socket.socket()
s.connect(("chal.examplectf.com", 12345))
```
Alternatively, use [pwntools]:
```py
from pwn import *
r = remote("chal.examplectf.com", 12345)
```
### Why do I get a binary and a C file?
The binary is often provided because specific addresses inside it matter when constructing an attack.
Sometimes a libc is also provided, for attacks that need specific libc addresses.
Something like [pwntools] makes it easy to explore the binary and search for addresses locally, then swap the target over to the server once you've found a way to crack it.
[pwntools]: https://pwntools.com
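For example, here's a sketch of a typical workflow; the binary name `./chal`, the `win` symbol, and the overflow offset are all made up for illustration:
```py
from pwn import *

binary = ELF("./chal")  # hypothetical distributed binary
LOCAL = True            # flip to False to attack the server

r = process(binary.path) if LOCAL else remote("chal.examplectf.com", 12345)

# Addresses come from the ELF itself, so the same script
# works unchanged against the local copy and the server.
win = binary.symbols["win"]       # hypothetical target function
r.sendline(b"A" * 72 + p64(win))  # made-up overflow offset
r.interactive()
```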
### I get some text file with values like N, e, c. What does this mean?
These are usually some values for working with a cryptosystem called [RSA].
[RSA]: https://en.wikipedia.org/wiki/RSA_(cryptosystem)
* $N$ is the modulus, part of the public key. It's calculated as $N = p \times q$, where the primes $p$ and $q$ are kept private
* $e$ is the public exponent, the other part of the public key
* $c$ is the ciphertext, computed as $c = m^e \bmod N$
* $m$ (if included) is the message, but this is usually the thing you're looking for
Lots of CTFs will do modified versions of RSA as a puzzle.
> [!admonition: NOTE]
> If the $N$ is just given to you directly instead of in a script, just go ahead and chuck the $N$ into [factordb] to see if it's already been factored!
[factordb]: https://factordb.com/
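Once you have $p$ and $q$ (say, from [factordb]), recovering the message from textbook RSA is a few lines of Python. Here's a toy-sized sketch; real challenge values are much larger:
```py
# Toy parameters; in a challenge, p and q come from factordb or your own attack
p, q, e = 61, 53, 17
N = p * q
c = pow(42, e, N)        # pretend 42 is the encoded flag

phi = (p - 1) * (q - 1)  # Euler's totient of N
d = pow(e, -1, phi)      # private exponent (modular inverse, Python 3.8+)
m = pow(c, d, N)         # decrypt: m = c^d mod N
assert m == 42
```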
### I need to quickly convert some data
[CyberChef] is my tool of choice when it comes to this stuff.
[cyberchef]: https://gchq.github.io/CyberChef/
### I have a file but I don't know what it is
Plug the file through the [file] command!
It'll tell you what kind of file it is by looking at the first few bytes (the _magic number_). For example:
[file]: https://en.wikipedia.org/wiki/File_(command)
```
$ file binary
binary: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, BuildID[sha1]=62bc37ea6fa41f79dc756cc63ece93d8c5499e89, for GNU/Linux 3.2.0, not stripped
```
This means it's an executable program.
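Under the hood, `file` is mostly comparing magic numbers. A stripped-down Python version of just the ELF check might look like:
```py
# Check whether a file starts with the ELF magic number, like `file` does
with open("binary", "rb") as fh:
    magic = fh.read(4)
print("ELF executable" if magic == b"\x7fELF" else "something else")
```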
Another useful tool is [binwalk], which tells you whether other files are packed inside.
This often comes up in forensics challenges.
[binwalk]: https://github.com/ReFirmLabs/binwalk
### I'm on Windows. What do?
Install [WSL] or [dual boot].
[WSL]: https://learn.microsoft.com/en-us/windows/wsl/about
[dual boot]: https://wiki.archlinux.org/title/Dual_boot_with_Windows

@@ -0,0 +1,333 @@
---
title: Visual examples of hcomp
slug: 2024-09-18-hcomp
date: 2024-09-18T04:07:13-05:00
tags: [hott, cubical, agda, type-theory]
---
[**hcomp**][1] is a primitive operation in [cubical type theory][cubical] that completes the _lid_ of an incomplete cube, given the bottom face and some number of side faces.
[1]: https://agda.readthedocs.io/en/latest/language/cubical.html#homogeneous-composition
[cubical]: https://arxiv.org/abs/1611.02108
> [!admonition: NOTE]
> All the code samples in this document are written in _and_ checked for correctness by [Agda], a dependently typed programming language that implements cubical type theory.
> Many of the names and symbols in the code samples are clickable links that take you to where that symbol is defined.
> Below are all of the imports required for this page to work.
[agda]: https://agda.readthedocs.io/en/latest/getting-started/what-is-agda.html
<details>
<summary>Imports for Agda</summary>
```
{-# OPTIONS --cubical #-}
module 2024-09-18-hcomp.index where
open import Cubical.Foundations.Prelude hiding (isProp→isSet)
open import Cubical.Foundations.Equiv.Base
open import Cubical.Foundations.Isomorphism
open import Cubical.Core.Primitives
open import Cubical.HITs.Susp.Base
open import Cubical.HITs.S1.Base
open import Data.Bool.Base hiding (_∧_; _∨_)
```
</details>
## Two-dimensional case
In two dimensions, $\mathsf{hcomp}$ can be understood as the double-composition operation.
Given three paths $p : x \equiv y$, $q : y \equiv z$ and $r : x \equiv w$, it will return a path representing the composed path $(r^{-1} \cdot p \cdot q) : (w \equiv z)$.
"Single" composition (between two paths rather than three) is typically implemented as a double composition with the left leg as $\mathsf{refl}$.
Without the double composition, this looks like:
![](./2d.jpg)
```
path-comp : {A : Type} {x y z : A} → x ≡ y → y ≡ z → x ≡ z
path-comp {x = x} p q i =
  let u = λ j → λ where
        (i = i0) → x
        (i = i1) → q j
  in hcomp u (p i)
```
The `(i = i0)` face here is $\mathsf{refl}_x(j)$, which is definitionally equal to $x$.
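As a sanity check that the lid really is a path $x \equiv z$, look at its endpoints: when a face constraint holds, $\mathsf{hcomp}$ computes to that face at the top of the box, so
$$
(\mathsf{hcomp}\; u\; (p\, i))\big|_{i = \mathsf{i0}} = x, \qquad (\mathsf{hcomp}\; u\; (p\, i))\big|_{i = \mathsf{i1}} = q(\mathsf{i1}) = z
$$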
## Example: $\mathsf{isProp}(A) \rightarrow \mathsf{isSet}(A)$
Suppose we want to prove that all [mere proposition]s ([h-level] 1) are [sets] ([h-level] 2).
This result exists in the [cubical library], but let's go over it here.
[sets]: https://ncatlab.org/nlab/show/h-set
[mere proposition]: https://ncatlab.org/nlab/show/mere+proposition
[h-level]: https://ncatlab.org/nlab/show/homotopy+level
[cubical library]: https://github.com/agda/cubical/
```
isProp→isSet : {A : Type} → isProp A → isSet A
isProp→isSet {A} f = goal where
  goal : (x y : A) → (p q : x ≡ y) → p ≡ q
  goal x y p q j i = -- ...
```
We're given a type $A$ and a proof that it's a [mere proposition]; call it $f : \mathsf{isProp}(A)$.
Now let's construct an hcomp. In a [set][sets], we'd want paths $p$ and $q$ between the same points $x$ and $y$ to be identified.
Suppose $p$ and $q$ operate over the same dimension, $i$.
If we want to find a path between $p$ and $q$, we'll want another dimension.
Let's call this $j$.
So essentially, the final goal is a square with these boundaries:
![](./goal.jpg)
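In symbols, the boundary we need is:
$$
(j = \mathsf{i0}) \mapsto p(i), \qquad (j = \mathsf{i1}) \mapsto q(i), \qquad (i = \mathsf{i0}) \mapsto x, \qquad (i = \mathsf{i1}) \mapsto y
$$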
Our goal is to fill this square in.
Well, what is a square but a missing face in a 3-dimensional cube?
Let's draw out what this cube looks like, and then have $\mathsf{hcomp}$ give us the top face.
Before getting started, let's familiarize ourselves with the dimensions we're working with:
* $i$ is the left-right dimension, the one that $p$ and $q$ work over
* $j$ is the dimension of our final path $p \equiv q$.
Note that this is the first argument, because our top-level ask was $p \equiv q$.
* Let's introduce a dimension $k$ for doing our $\mathsf{hcomp}$
![](./cubeextend.jpg)
We can map both $p(i)$ and $q(i)$ down to a square that has $x$ on all corners and $\mathsf{refl}_x$ on all sides.
Let's start with the left and right faces.
The left is represented by the _face lattice_ $(i = \mathsf{i0})$.
This is a _sub-polyhedron_ of the main cube, since it only represents one face.
The only description of this face is that it contains all the points such that the $i$ dimension is equal to $\mathsf{i0}$.
Similarly, the right face can be represented by $(i = \mathsf{i1})$.
To fill this face, we must find some value in $A$ that satisfies all the boundary conditions.
Fortunately, we're given something that can produce values in $A$: the proof $f : \mathsf{isProp}(A)$ that we are given!
$f$ tells us that any two elements of $A$ are equal, so $f(x, x) : x \equiv x$ and $f(x, y) : x \equiv y$.
This means we can define the left face as $f(x, x)(k)$ and the right face as $f(x, y)(k)$.
(Remember, $k$ is the direction going from bottom to top)
![](./sides.jpg)
We can write this down as a mapping:
* $(i = \mathsf{i0}) \mapsto f(x, x)(k)$
* $(i = \mathsf{i1}) \mapsto f(x, y)(k)$
The same logic applies to the front face $(j = \mathsf{i0})$ and back face $(j = \mathsf{i1})$.
We can use $\mathsf{isProp}$ to generate faces for us, this time using $x$ and $p(i)$, or $x$ and $q(i)$, as the two endpoints.
![](./frontback.jpg)
Writing this down as code, we get:
* $(j = \mathsf{i0}) \mapsto f(x, p(i))(k)$
* $(j = \mathsf{i1}) \mapsto f(x, q(i))(k)$
This time, the $i$ dimension has the $\mathsf{refl}$ edges.
Since all the edges on the bottom face are $\mathsf{refl}$, we can just use the constant $\mathsf{refl}_{\mathsf{refl}_x}$ as the bottom face.
In cubical, this is the constant expression `x`.
Putting this all together, we can use $\mathsf{hcomp}$ to complete the top face $(k = \mathsf{i1})$ for us:
```
let u = λ k → λ where
      (i = i0) → f x x k
      (i = i1) → f x y k
      (j = i0) → f x (p i) k
      (j = i1) → f x (q i) k
in hcomp u x
```
Hooray! Agda is happy with this.
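For completeness, here's the whole proof as a single definition, just the two snippets above combined; I've named it `isProp→isSet′` so it doesn't clash with the version built up above:
```
isProp→isSet′ : {A : Type} → isProp A → isSet A
isProp→isSet′ f x y p q j i =
  let u = λ k → λ where
        (i = i0) → f x x k
        (i = i1) → f x y k
        (j = i0) → f x (p i) k
        (j = i1) → f x (q i) k
  in hcomp u x
```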
Let's move on to a more complicated example.
## Example: $\Sigma \mathbb{2} \simeq S^1$
Suspensions are an example of a higher inductive type, and it can be shown that the spheres can be defined iteratively in terms of them.
![](./numberline.jpg)
Since the $0$-sphere is just two points (solutions to $\| \bm{x} \|_2 = 1$ in 1 dimension), we represent this as $S^0 :\equiv 2$.
We can show that a suspension over this, $\Sigma 2$, is equivalent to the classic $1$-sphere, the circle $S^1$.
Let's state the lemma:
```
Σ2≃S¹ : Susp Bool ≃ S¹
```
Equivalences can be generated from isomorphisms:
```
Σ2≃S¹ = isoToEquiv (iso f g fg gf) where
```
In this model, we're going to define $f$ and $g$ by squishing both the north and south poles into a single point.
The choice of which meridian to collapse is arbitrary, so I'll choose the one through $\mathsf{true}$.
This way, $\mathsf{true}$ is suspended into the $\mathsf{refl}$ path, and $\mathsf{false}$ is suspended into the $\mathsf{loop}$.
Here's a picture of our function $f$:
![](./suspbool.jpg)
The left side is drawn in the traditional suspension layout, with meridians going through $\mathsf{true}$ and $\mathsf{false}$; the right side "squishes" the north and south poles together along $\mathsf{refl}$, leaving the other meridian to become the $\mathsf{loop}$.
On the other side, $g$, we want to map back from $S^1$ into $\Sigma 2$.
We can map the $\mathsf{loop}$ back into $\mathsf{merid} \; \mathsf{false}$, but the types mismatch, since $\mathsf{loop} : \mathsf{base} \equiv \mathsf{base}$ but $\mathsf{merid} \; \mathsf{false} : \mathsf{north} \equiv \mathsf{south}$.
But since $\mathsf{merid} \; \mathsf{true}$ is $\mathsf{refl}$, we can just concatenate with its inverse to get the full path:
$$
\mathsf{merid} \; \mathsf{false} \cdot (\mathsf{merid} \; \mathsf{true})^{-1} : \mathsf{north} \equiv \mathsf{north}
$$
In Agda, we can write it like this:
<div class="halfSplit">
```
f : Susp Bool → S¹
f north = base
f south = base
f (merid true i) = base
f (merid false i) = loop i
```
```
g : S¹ → Susp Bool
g base = north
g (loop i) = (merid false ∙ sym (merid true)) i
```
</div>
Now, the fun part: proving the two round-trip laws needed to show that these functions indeed form an isomorphism:
* $f(g(s)) \equiv s$
* $g(f(b)) \equiv b$
Let's show $f(g(s)) \equiv s$ by induction on $s$, which is of type $S^1$.
The $\mathsf{base}$ case is easily handled by $\mathsf{refl}$, since $f(g(\mathsf{base})) = f(\mathsf{north}) = \mathsf{base}$ definitionally by the reduction rules we gave above.
```
fg : (s : S¹) → f (g s) ≡ s
fg base = refl
```
The loop case is trickier. If we were using book HoTT, here's how we would solve this (this result is given in Lemma 6.5.1 of the HoTT book):
$$
\begin{align*}
\mathsf{ap}_f(\mathsf{ap}_g(\mathsf{loop})) &\equiv \mathsf{loop} \\
\mathsf{ap}_f(\mathsf{merid} \; \mathsf{false} \cdot (\mathsf{merid} \; \mathsf{true})^{-1}) &\equiv \mathsf{loop} \\
\mathsf{ap}_f(\mathsf{merid} \; \mathsf{false}) \cdot \mathsf{ap}_f (\mathsf{merid} \; \mathsf{true})^{-1} &\equiv \mathsf{loop} \\
\mathsf{loop} \cdot \mathsf{ap}_f (\mathsf{merid} \; \mathsf{true})^{-1} &\equiv \mathsf{loop} \\
\mathsf{loop} \cdot \mathsf{refl}^{-1} &\equiv \mathsf{loop} \\
\mathsf{loop} \cdot \mathsf{refl} &\equiv \mathsf{loop} \\
\mathsf{loop} &\equiv \mathsf{loop} \\
\end{align*}
$$
Between the second and third steps, I used functoriality of the $\mathsf{ap}$ operation (equations _(i)_ and _(ii)_ of Lemma 2.2.2 in the HoTT book), which distribute $\mathsf{ap}$ over composition and inversion.
How can we construct a cube to solve this? Like in the first example, let's start out by writing down the face we want to end up with:
![](./goal2.jpg)
We're looking for the path in the bottom-to-top $j$ dimension.
We would like to have $\mathsf{hcomp}$ give us this face.
This time, let's see the whole cube first and then pick apart how it was constructed.
![](./fg.jpg)
First of all, note that unrolling $f(g(\mathsf{loop} \; i))$ would give us $f((\mathsf{merid} \; \mathsf{false} \cdot (\mathsf{merid} \; \mathsf{true})^{-1}) \; i)$, which is the same as $\big(\mathsf{ap}_f(\mathsf{merid} \; \mathsf{false}) \cdot \mathsf{ap}_f(\mathsf{merid} \; \mathsf{true})^{-1}\big)(i)$.
Since this is a composition, it deserves its own face of the cube, with our goal $f(g(\mathsf{loop} \; i))$ being the lid.
We can express this as an $\mathsf{hfill}$ operation, which takes the same face arguments as $\mathsf{hcomp}$, but produces the filling of the whole box rather than just the upper lid.
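For reference, the type of $\mathsf{hfill}$ in the cubical library looks roughly like this (eliding universe levels); the extra interval argument says how far up the box to go, so $\mathsf{hfill}\;u\;u_0\;\mathsf{i0}$ is the base and $\mathsf{hfill}\;u\;u_0\;\mathsf{i1}$ agrees with $\mathsf{hcomp}\;u\;u_0$:
```
hfill : {A : Type} {φ : I}
        (u : ∀ i → Partial φ A)
        (u0 : A [ φ ↦ u i0 ])
        (i : I) → A
```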
For the rest of the faces, we can just fill in opposite sides until the cube is complete.
The Agda translation looks like this:
```
fg (loop i) j =
  let
    u = λ k → λ where
      (i = i0) → base
      (i = i1) → f (merid true (~ k))
      (j = i0) →
        let u = λ k' → λ where
              (i = i0) → base
              (i = i1) → f (merid true (~ k'))
        in hfill u (inS (f (merid false i))) k
      (j = i1) → loop i
  in hcomp u (f (merid false i))
```
Nothing should be too surprising here, other than the use of a nested $\mathsf{hfill}$ which is needed to describe the face that contains the composition.
Now let's prove $g(f(b)) \equiv b$, where $b : \Sigma 2$, by induction on $b$.
Again, the point constructor cases are relatively simple.
```
gf : (b : Susp Bool) → g (f b) ≡ b
gf north = refl
gf south = merid true
```
For the meridian case $(b : 2) \rightarrow \mathsf{north} \equiv \mathsf{south}$, we can do $2$-induction on $b$.
First, for the $\mathsf{true}$ case, let's see the diagram of what we're trying to prove.
![](./goal3.jpg)
After simplifying all the expressions, we find that $g(f(\mathsf{merid} \; \mathsf{true} \; i)) = g(\mathsf{base}) = \mathsf{north}$.
So really, we're trying to prove that $\mathsf{north} \equiv \mathsf{merid} \; \mathsf{true} \; i$.
![](./goal4.jpg)
But wait, that's odd... there's a $\mathsf{south}$ in the top right corner. What can we do?
Well, we could actually use the _same_ path on the right as on the top.
We won't even need an $\mathsf{hcomp}$ for this, just a clever interval expression:
```
gf (merid true i) j = merid true (i ∧ j)
```
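To double-check this, evaluate the boundary of $\mathsf{merid} \; \mathsf{true} \; (i \wedge j)$ and compare it against the square we wanted:
$$
\begin{align*}
(i = \mathsf{i0}) &\mapsto \mathsf{merid} \; \mathsf{true} \; \mathsf{i0} = \mathsf{north} \\
(j = \mathsf{i0}) &\mapsto \mathsf{merid} \; \mathsf{true} \; \mathsf{i0} = \mathsf{north} \\
(i = \mathsf{i1}) &\mapsto \mathsf{merid} \; \mathsf{true} \; j \\
(j = \mathsf{i1}) &\mapsto \mathsf{merid} \; \mathsf{true} \; i
\end{align*}
$$
The left and bottom edges are constantly $\mathsf{north}$, the right edge is $\mathsf{merid} \; \mathsf{true}$, and the top edge is exactly the path we were asked for.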
One last cube for the $g(f(\mathsf{merid} \; \mathsf{false} \; i)) \; j$ case.
This cube actually looks very similar to the $f(g(s))$ cube, although the points are in the suspension type rather than in $S^1$.
Once again we have a composition, so we will need an extra $\mathsf{hfill}$ to get us the front face of this cube.
![](./gf.jpg)
This cube cleanly extracts into the following Agda code.
```
gf (merid false i) j =
  let
    u = λ k → λ where
      (i = i0) → north
      (i = i1) → merid true (j ∨ ~ k)
      (j = i0) →
        let u = λ k' → λ where
              (i = i0) → north
              (i = i1) → merid true (~ k')
        in hfill u (inS (merid false i)) k
      (j = i1) → merid false i
  in hcomp u (merid false i)
```
These are some of my takeaways from studying cubical type theory and $\mathsf{hcomp}$ these past few weeks.
One thing I have to note is that drawing all these cubes really does make me wish there were some kind of 3D visualizer for them...
Anyway, that's all! See you next post :)
### References
* https://arxiv.org/abs/1611.02108
* https://agda.readthedocs.io/en/v2.7.0/language/cubical.html
* https://1lab.dev/1Lab.Path.html#box-filling
* https://github.com/martinescardo/HoTTEST-Summer-School/blob/main/Agda/Cubical/Lecture8-notes.lagda.md#homogeneous-composition-hcomp
