479 Commits

Author SHA1 Message Date
John Alanbrook  7d0c96f328 inline 2026-02-21 13:44:34 -06:00
John Alanbrook  700b640cf1 type propagation 2026-02-21 04:13:21 -06:00
John Alanbrook  74e0923629 don't drop type info from add 2026-02-21 03:38:20 -06:00
John Alanbrook  fea76ecac5 remove typed ops 2026-02-21 02:52:17 -06:00
John Alanbrook  ac707bc399 Merge branch 'better_disasm' into optimize_mcode 2026-02-21 01:28:59 -06:00
John Alanbrook  03b5fc1a5e more build tools 2026-02-21 01:21:53 -06:00
John Alanbrook  5caa5d1288 fix merge error 2026-02-21 01:21:26 -06:00
John Alanbrook  3ebd98fc00 Merge branch 'better_disasm' into optimize_mcode 2026-02-21 00:42:34 -06:00
John Alanbrook  ede033f52e disasm 2026-02-21 00:41:47 -06:00
John Alanbrook  f4ad851a3f Merge branch 'master' into optimize_mcode 2026-02-20 23:18:27 -06:00
John Alanbrook  50a2d67c90 fix stone crash 2026-02-20 23:18:19 -06:00
John Alanbrook  5ea0de9fbb crash vm trace 2026-02-20 23:14:43 -06:00
John Alanbrook  34cb19c357 fix gc closure shortening 2026-02-20 23:07:47 -06:00
John Alanbrook  26f63bccee Merge branch 'lexical_this' 2026-02-20 22:02:31 -06:00
John Alanbrook  93aaaa43a1 lexical this 2026-02-20 22:02:26 -06:00
John Alanbrook  1e606095a2 Merge branch 'better_fn_error' 2026-02-20 21:59:29 -06:00
John Alanbrook  652e8a19f0 better fn error 2026-02-20 21:59:17 -06:00
John Alanbrook  bcc9bdac83 Merge branch 'improve_str_concat' into optimize_mcode 2026-02-20 21:55:22 -06:00
John Alanbrook  f46784d884 optimize 2026-02-20 21:55:12 -06:00
John Alanbrook  fca1041e52 str stone; concat 2026-02-20 21:54:19 -06:00
John Alanbrook  e73152bc36 better error when use without tet 2026-02-20 21:49:22 -06:00
John Alanbrook  6f932cd3c3 Merge branch 'master' into optimize_mcode 2026-02-20 21:24:22 -06:00
John Alanbrook  55fa250f12 quicker fn call 2026-02-20 21:24:18 -06:00
John Alanbrook  374069ae99 Merge branch 'imp_audit' 2026-02-20 20:51:34 -06:00
John Alanbrook  3e4f3dff11 closure get type back inference 2026-02-20 20:51:21 -06:00
John Alanbrook  6cac591dc9 move fold 2026-02-20 20:32:28 -06:00
John Alanbrook  f6dab3c081 fix incorrect code elimination 2026-02-20 20:29:09 -06:00
John Alanbrook  a82c13170f Merge branch 'imp_audit' 2026-02-20 20:03:19 -06:00
John Alanbrook  3e7b3e9994 emit warnings for unused vars 2026-02-20 20:03:12 -06:00
John Alanbrook  68ae10bed0 fix resolving c symbols in C 2026-02-20 20:01:26 -06:00
John Alanbrook  cf3c2c9c5f reduce mcode 2026-02-20 19:53:29 -06:00
John Alanbrook  bb8536f6c9 better error print 2026-02-20 19:14:45 -06:00
John Alanbrook  0e86d6f5a1 debug output on build 2026-02-20 19:06:03 -06:00
John Alanbrook  d8df467eae Merge branch 'fix_find' 2026-02-20 18:52:35 -06:00
John Alanbrook  995d900e6e fix find usage 2026-02-20 18:52:02 -06:00
John Alanbrook  eeb2f038a1 fix scheduler memory error on exit 2026-02-20 18:42:21 -06:00
John Alanbrook  ce2e11429f fix build regression 2026-02-20 18:09:19 -06:00
John Alanbrook  55fae8b5d0 Merge branch 'master' into quicken_build 2026-02-20 18:00:18 -06:00
John Alanbrook  c722b9d648 src bsaed dylib 2026-02-20 17:53:25 -06:00
John Alanbrook  680b257a44 fingerprint hash 2026-02-20 17:52:25 -06:00
John Alanbrook  611d538e9f json correct args 2026-02-20 17:04:14 -06:00
John Alanbrook  844ca0b8d5 engine os.print 2026-02-20 16:40:03 -06:00
John Alanbrook  ec404205ca Merge branch 'fix_compile_warnings' 2026-02-20 15:40:36 -06:00
John Alanbrook  9ebe6efe2b regenerated boot files 2026-02-20 15:40:27 -06:00
John Alanbrook  e455159b2d Merge branch 'fix_compile_warnings' 2026-02-20 15:34:50 -06:00
John Alanbrook  5af76bce9b rm print 2026-02-20 15:33:46 -06:00
John Alanbrook  5e413e8d81 Merge branch 'audit_dups' 2026-02-20 15:07:58 -06:00
John Alanbrook  8fe14f9a42 Merge branch 'improve_compile_error' into audit_dups 2026-02-20 15:07:51 -06:00
John Alanbrook  148cf12787 fixed logging and remove 2026-02-20 15:03:58 -06:00
John Alanbrook  9c35e77f3f Merge branch 'improve_fetch' into audit_dups 2026-02-20 15:03:30 -06:00
John Alanbrook  882a3ae8cb centralized ensure dir 2026-02-20 15:02:16 -06:00
John Alanbrook  35d0890242 improved fetch 2026-02-20 15:00:08 -06:00
John Alanbrook  98515e9218 Merge branch 'fix_compile_warnings' 2026-02-20 14:51:53 -06:00
John Alanbrook  11fb213a74 fix gc backtrace 2026-02-20 14:51:42 -06:00
John Alanbrook  4ac92c8a87 reduce dups 2026-02-20 14:44:48 -06:00
John Alanbrook  ed69d53573 Merge branch 'improve_fetch' into audit_dups 2026-02-20 14:43:26 -06:00
John Alanbrook  e4588e43f2 Merge branch 'improve_compile_error' 2026-02-20 14:39:51 -06:00
John Alanbrook  2f41f58521 update docs for compile chain 2026-02-20 14:35:48 -06:00
John Alanbrook  06866bcc0a Merge branch 'fix_compile_warnings' 2026-02-20 14:21:10 -06:00
John Alanbrook  f20fbedeea remove js_malloc from public 2026-02-20 14:21:07 -06:00
John Alanbrook  285395807b core packages now split out 2026-02-20 14:14:07 -06:00
John Alanbrook  601a78b3c7 package resolution 2026-02-20 14:10:24 -06:00
John Alanbrook  ebfc89e072 zero copy blob 2026-02-20 14:06:42 -06:00
John Alanbrook  f0c2486a5c better path resolution 2026-02-20 13:39:26 -06:00
John Alanbrook  c5ad4f0a99 thin out quickjs-internal 2026-02-20 13:08:39 -06:00
John Alanbrook  e6d05abd03 harsher compile error 2026-02-20 12:52:40 -06:00
John Alanbrook  c0aff9e9bf fix compiler warnings 2026-02-20 12:44:18 -06:00
John Alanbrook  8d449e6fc6 better compiler warnings adn errors 2026-02-20 12:40:49 -06:00
John Alanbrook  54e5be0773 update git 2026-02-20 08:48:34 -06:00
John Alanbrook  38f368c6d6 faster startup and fix asan error 2026-02-19 03:47:30 -06:00
John Alanbrook  06ad466b1a import graph 2026-02-19 03:19:24 -06:00
John Alanbrook  bab4d50b2a shorten frames to closure vars only on gc 2026-02-19 01:55:35 -06:00
John Alanbrook  e7fec94e38 Merge branch 'fix_aot' 2026-02-19 01:38:13 -06:00
John Alanbrook  ab43ab0d2c aot fix 2026-02-19 01:37:54 -06:00
John Alanbrook  a15844af58 Merge branch 'fix_aot' 2026-02-19 01:23:58 -06:00
John Alanbrook  85ef711229 fixes 2026-02-19 01:23:41 -06:00
John Alanbrook  ddfb0b1345 Merge branch 'fix_aot' 2026-02-19 00:47:41 -06:00
John Alanbrook  3f206d80dd jscode 2026-02-19 00:47:34 -06:00
John Alanbrook  3e0dc14318 use globfs 2026-02-19 00:43:06 -06:00
John Alanbrook  19132c1517 jscode 2026-02-19 00:33:16 -06:00
John Alanbrook  e59bfe19f7 Merge branch 'master' into fix_aot 2026-02-18 23:55:17 -06:00
John Alanbrook  e004b2c472 optimize frames; remove trampoline 2026-02-18 22:37:48 -06:00
John Alanbrook  27ca008f18 lower ops directly 2026-02-18 21:18:18 -06:00
John Alanbrook  a05d0e2525 better streamline 2026-02-18 20:56:15 -06:00
John Alanbrook  777474ab4f updated docs for dylib paths 2026-02-18 20:30:54 -06:00
John Alanbrook  621da78de9 faster aot 2026-02-18 20:24:12 -06:00
John Alanbrook  e2c26737f4 Merge branch 'dylib_cache' 2026-02-18 19:42:04 -06:00
John Alanbrook  02eb58772c fix build hangs 2026-02-18 19:41:59 -06:00
John Alanbrook  14a94aff12 Merge branch 'dylib_cache' 2026-02-18 19:27:33 -06:00
John Alanbrook  6bc9dd53a7 better cache handling 2026-02-18 19:27:28 -06:00
John Alanbrook  f7499c4f60 log reentrancy guard 2026-02-18 19:26:06 -06:00
John Alanbrook  fa5c0416fb correct log line blames 2026-02-18 18:47:46 -06:00
John Alanbrook  dc70a15981 add guards for root cycles 2026-02-18 18:27:12 -06:00
John Alanbrook  94fe47b472 Merge branch 'cleanup_thinc' 2026-02-18 18:00:34 -06:00
John Alanbrook  34521e44f1 jstext properly used for oncat 2026-02-18 17:58:36 -06:00
John Alanbrook  81561d426b use unstone jstext for string creation 2026-02-18 17:39:22 -06:00
John Alanbrook  7a4c72025f Merge branch 'master' into fix_aot 2026-02-18 17:03:39 -06:00
John Alanbrook  303f894a70 js helpers for migrating 2026-02-18 16:59:42 -06:00
John Alanbrook  c33c35de87 aot pass all tests 2026-02-18 16:53:33 -06:00
John Alanbrook  417eec2419 Merge branch 'simplify_disruption' 2026-02-18 16:50:09 -06:00
John Alanbrook  c0cd6a61a6 disruption 2026-02-18 16:47:33 -06:00
John Alanbrook  42dc7243f3 fix JS_ToNumber 2026-02-18 14:20:42 -06:00
John Alanbrook  469b7ac478 Merge branch 'master' into fix_aot 2026-02-18 14:16:42 -06:00
John Alanbrook  4872c62704 fix JS_ToNumber 2026-02-18 14:14:44 -06:00
John Alanbrook  91b73f923a Merge branch 'fix_heap_closure' 2026-02-18 12:46:18 -06:00
John Alanbrook  4868a50085 fix compilation error 2026-02-18 12:46:07 -06:00
John Alanbrook  6f8cad9bb2 Merge branch 'improved_log' 2026-02-18 12:22:45 -06:00
John Alanbrook  a1d1e721b6 stack trace in logging toml 2026-02-18 12:22:33 -06:00
John Alanbrook  dc7f933424 add help info to --help 2026-02-18 12:07:17 -06:00
John Alanbrook  36f054d99d Merge branch 'improved_log' 2026-02-18 12:05:10 -06:00
John Alanbrook  037fdbfd2c log stack traces 2026-02-18 12:05:05 -06:00
John Alanbrook  28f5a108d8 improved help 2026-02-18 12:00:46 -06:00
John Alanbrook  d8422ae69b Merge branch 'sem_grab' 2026-02-18 11:00:54 -06:00
John Alanbrook  187d7e9832 correct this handling 2026-02-18 11:00:51 -06:00
John Alanbrook  4aafb3c5e9 Merge branch 'improved_log' 2026-02-18 10:49:33 -06:00
John Alanbrook  4b635228f9 fix build/dl loading; use core from anywhere 2026-02-18 10:49:27 -06:00
John Alanbrook  42f7c270e1 Merge branch 'sem_grab' 2026-02-18 10:49:00 -06:00
John Alanbrook  bd7f9f34ec simplify compilation requestors 2026-02-18 10:46:47 -06:00
John Alanbrook  22c0b421d2 Merge branch 'sem_grab' 2026-02-18 10:35:25 -06:00
John Alanbrook  8be5936c10 better sem analysis 2026-02-18 10:34:47 -06:00
John Alanbrook  bd53089578 doc stop, json, log 2026-02-18 10:18:28 -06:00
John Alanbrook  76c482b84e improved logging 2026-02-18 10:16:01 -06:00
John Alanbrook  b16fa75706 flag used for actor stopping insetad of counter 2026-02-17 17:59:12 -06:00
John Alanbrook  ad419797b4 native function type 2026-02-17 17:40:44 -06:00
John Alanbrook  5ee51198a7 kill actor when abusive 2026-02-17 17:34:25 -06:00
John Alanbrook  2df45b2acb Merge branch 'json_gc_fix' 2026-02-17 15:59:51 -06:00
John Alanbrook  933c63caf8 Merge branch 'root_gc' 2026-02-17 15:59:43 -06:00
John Alanbrook  dc422932d3 Merge branch 'heap_blob' 2026-02-17 15:59:38 -06:00
John Alanbrook  b25285f2e1 Merge branch 'master' into fix_aot 2026-02-17 15:48:54 -06:00
John Alanbrook  b3573dbf26 native flag 2026-02-17 15:48:49 -06:00
John Alanbrook  8a24a69120 fixes to allow native to work - should revert when recursion is fixed 2026-02-17 15:44:17 -06:00
John Alanbrook  56ac53637b heap blobs 2026-02-17 15:41:53 -06:00
John Alanbrook  5415726e33 actors use hdiden symbol now 2026-02-17 14:35:54 -06:00
John Alanbrook  56cb1fb4c6 package now returns C modules 2026-02-17 14:33:06 -06:00
John Alanbrook  278d685c8f Merge branch 'json_gc_fix' 2026-02-17 14:00:28 -06:00
John Alanbrook  51815b66d8 json rooting fix 2026-02-17 14:00:23 -06:00
John Alanbrook  78051e24f3 bench now compares aot 2026-02-17 13:42:36 -06:00
John Alanbrook  c02fbbd9e0 tooling improvements 2026-02-17 13:37:17 -06:00
John Alanbrook  ad26e71ad1 fix push array on itself 2026-02-17 13:27:08 -06:00
John Alanbrook  2e78e7e0b8 Merge branch 'bench_endoders' 2026-02-17 12:36:15 -06:00
John Alanbrook  8f9eb0aaa9 benchmark encoders and speed them up 2026-02-17 12:36:07 -06:00
John Alanbrook  0965aed0ef Merge branch 'fix_actors' 2026-02-17 12:35:26 -06:00
John Alanbrook  1b00fd1f0a Merge branch 'fix_native_suite' 2026-02-17 12:35:20 -06:00
John Alanbrook  3bf63780fd Merge branch 'core_integration' into fix_imports 2026-02-17 12:34:47 -06:00
John Alanbrook  f7f26a1f00 fix building 2026-02-17 12:32:09 -06:00
John Alanbrook  4c9db198db fix string hash bug 2026-02-17 12:26:52 -06:00
John Alanbrook  eff3548c50 bootstrap now uses streamline 2026-02-17 12:23:59 -06:00
John Alanbrook  4fc48fd6f2 Merge branch 'quicken_native' into fix_native_suite 2026-02-17 12:22:42 -06:00
John Alanbrook  3f6388ff4e far smaller assmbly 2026-02-17 11:53:46 -06:00
John Alanbrook  2be2b15a61 update actor doc and add more actor based tests 2026-02-17 11:50:46 -06:00
John Alanbrook  12b6c3544e fix all core script syntax issues 2026-02-17 11:23:12 -06:00
John Alanbrook  570f0cdc83 add qbe config to copmile 2026-02-17 11:17:59 -06:00
John Alanbrook  cc82fcb7d9 Merge branch 'quicken_native' into fix_native_suite 2026-02-17 11:12:59 -06:00
John Alanbrook  5ef3381fff native aot suite passes 2026-02-17 11:12:51 -06:00
John Alanbrook  5fcf765c8d parallel assembly 2026-02-17 10:57:50 -06:00
John Alanbrook  027c1549fc recursive add and install 2026-02-17 10:52:36 -06:00
John Alanbrook  8c408a4b81 qbe in native build 2026-02-17 10:23:47 -06:00
John Alanbrook  2d054fcf21 fix package 2026-02-17 10:11:02 -06:00
John Alanbrook  857f099a68 Merge branch 'cell_lsp' 2026-02-17 09:01:28 -06:00
John Alanbrook  e0b6c69bfe build fix 2026-02-17 09:01:24 -06:00
John Alanbrook  2cef766b0a Merge branch 'gen_dylib' 2026-02-17 08:59:05 -06:00
John Alanbrook  a3ecb0ad05 Merge branch 'fix_actors' 2026-02-17 08:53:57 -06:00
John Alanbrook  2a38292ff7 fix actor working 2026-02-17 08:53:16 -06:00
John Alanbrook  9e42a28d55 aot compile vm_suite 2026-02-17 03:33:21 -06:00
John Alanbrook  4d4d50a905 fix claude.md 2026-02-17 03:10:45 -06:00
John Alanbrook  08515389d2 fix cell toml and add documentation for tools 2026-02-17 02:36:53 -06:00
John Alanbrook  fbdfbc1200 add audit 2026-02-17 01:54:25 -06:00
John Alanbrook  d975214ba6 Merge branch 'master' into cell_lsp 2026-02-17 01:20:11 -06:00
John Alanbrook  3c28dc2c30 fix toml issue / isobject 2026-02-17 01:19:43 -06:00
John Alanbrook  2633fb986f improved semantic indexing 2026-02-17 01:08:10 -06:00
John Alanbrook  400c58e5f2 fix build 2026-02-17 01:04:42 -06:00
John Alanbrook  bd4714a732 Merge branch 'cell_lsp' 2026-02-17 00:28:17 -06:00
John Alanbrook  0ac575db85 fix package bug, improve stack trace 2026-02-17 00:28:10 -06:00
John Alanbrook  41f373981d add docs to website nav 2026-02-17 00:04:55 -06:00
John Alanbrook  c9dad91ea1 fix intrinsics and env 2026-02-16 23:05:00 -06:00
John Alanbrook  63955e45ff Merge branch 'master' into gen_dylib 2026-02-16 22:07:56 -06:00
John Alanbrook  4b7cde9400 progress on aot 2026-02-16 21:58:45 -06:00
John Alanbrook  8a19cffe9f Merge branch 'pit_lsp' into fix_libs 2026-02-16 21:53:11 -06:00
John Alanbrook  6315574a45 Merge branch 'fix_imports' into fix_libs 2026-02-16 21:53:00 -06:00
John Alanbrook  2051677679 better errors 2026-02-16 21:52:11 -06:00
John Alanbrook  d398ab8db0 lsp explain and index 2026-02-16 21:50:39 -06:00
John Alanbrook  e7b599e3ac add shop documentation and fix shop remove 2026-02-16 19:55:22 -06:00
John Alanbrook  1f3e53587d log available 2026-02-16 19:51:00 -06:00
John Alanbrook  dce0b5cc89 remove random str for imported 2026-02-16 19:13:37 -06:00
John Alanbrook  ce387d18d5 Merge branch 'fix_toml' into fix_libs 2026-02-16 19:10:53 -06:00
John Alanbrook  c02945e236 document functino nr args 2026-02-16 18:53:53 -06:00
John Alanbrook  8e198d9822 fix toml escape 2026-02-16 18:53:11 -06:00
John Alanbrook  17e35f023f fix building C 2026-02-16 18:47:43 -06:00
John Alanbrook  5a7169654a fixed pipeline module loading; better parse errors for function literals in objects 2026-02-16 18:41:35 -06:00
John Alanbrook  a1ee7dd458 better json pretty print 2026-02-16 17:00:06 -06:00
John Alanbrook  9dbe699033 better make 2026-02-16 01:45:00 -06:00
John Alanbrook  f809cb05f0 Merge branch 'fix_core_scripts' into quicken_mcode 2026-02-16 01:43:08 -06:00
John Alanbrook  788ea98651 bootstrap init 2026-02-16 01:36:36 -06:00
John Alanbrook  433ce8a86e update actors 2026-02-16 01:35:07 -06:00
John Alanbrook  cd6e357b6e Merge branch 'quicken_mcode' into gen_dylib 2026-02-16 00:35:40 -06:00
John Alanbrook  f4f56ed470 run dylibs 2026-02-16 00:35:23 -06:00
John Alanbrook  ff61ab1f50 better streamline 2026-02-16 00:34:49 -06:00
John Alanbrook  46c345d34e cache invalidation 2026-02-16 00:04:30 -06:00
John Alanbrook  dc440587ff pretty json 2026-02-15 22:55:11 -06:00
John Alanbrook  8f92870141 correct syntax errors in core scripts 2026-02-15 22:23:04 -06:00
John Alanbrook  7fc4a205f6 go reuses frames 2026-02-15 19:45:17 -06:00
John Alanbrook  23b201bdd7 dynamic dispatch 2026-02-15 17:51:07 -06:00
John Alanbrook  913ec9afb1 Merge branch 'audit_gc' into fix_slots 2026-02-15 15:44:28 -06:00
John Alanbrook  56de0ce803 fix infinite loop in shop 2026-02-15 15:41:09 -06:00
John Alanbrook  96bbb9e4c8 idompent 2026-02-15 14:58:46 -06:00
John Alanbrook  ebd624b772 fixing gc bugs; nearly idempotent 2026-02-15 13:14:26 -06:00
John Alanbrook  7de20b39da more detail on broken pipeline and vm suit tests 2026-02-15 11:51:23 -06:00
John Alanbrook  ee646db394 failsafe boot mode 2026-02-15 11:44:33 -06:00
John Alanbrook  ff80e0d30d Merge branch 'fix_gc' into pitweb 2026-02-15 10:04:54 -06:00
John Alanbrook  d9f41db891 fix syntax errors in build 2026-02-15 09:29:07 -06:00
John Alanbrook  860632e0fa update cli docs and fix cli scripts with new syntax 2026-02-14 22:24:32 -06:00
John Alanbrook  dcc9659e6b Merge branch 'runtime_rework' into fix_gc 2026-02-14 22:11:31 -06:00
John Alanbrook  2f7f2233b8 compiling 2026-02-14 22:08:55 -06:00
John Alanbrook  eee06009b9 no more special case for core C 2026-02-14 22:00:12 -06:00
John Alanbrook  a765872017 remove if/else dispatch from compile chain 2026-02-14 17:57:48 -06:00
John Alanbrook  a93218e1ff faster streamline 2026-02-14 17:14:43 -06:00
John Alanbrook  f2c4fa2f2b remove redundant check 2026-02-14 16:49:16 -06:00
John Alanbrook  5fe05c60d3 faster gc 2026-02-14 16:46:11 -06:00
John Alanbrook  e75596ce30 respsect array and object length requests 2026-02-14 15:42:19 -06:00
John Alanbrook  86609c27f8 correct sections 2026-02-14 15:13:18 -06:00
John Alanbrook  356c51bde3 better array allocation 2026-02-14 14:44:00 -06:00
John Alanbrook  89421e11a4 pull out prettify mcode 2026-02-14 14:14:34 -06:00
John Alanbrook  e5fc04fecd faster mach compile 2026-02-14 14:02:15 -06:00
John Alanbrook  8ec56e85fa shop audit 2026-02-14 14:00:27 -06:00
John Alanbrook  f49ca530bb fix delete gc bug 2026-02-13 21:52:37 -06:00
John Alanbrook  83263379bd ocaml style rooting macros 2026-02-13 20:46:31 -06:00
John Alanbrook  e80e615634 fix array gc bug; new gc error chasing 2026-02-13 16:58:42 -06:00
John Alanbrook  c1430fd59b Merge branch 'fix_gc' into runtime_rework 2026-02-13 15:42:37 -06:00
John Alanbrook  db73eb4eeb Merge branch 'mcode_streamline' into runtime_rework 2026-02-13 15:42:20 -06:00
John Alanbrook  f2556c5622 proper shop caching 2026-02-13 09:04:25 -06:00
John Alanbrook  291304f75d new way to track actor bad memory access 2026-02-13 09:03:33 -06:00
John Alanbrook  3795533554 clean up bytecode 2026-02-13 09:03:00 -06:00
John Alanbrook  d26a96bc62 cached bootstrap 2026-02-13 08:11:35 -06:00
John Alanbrook  0acaabd5fa merge add 2026-02-13 08:09:12 -06:00
John Alanbrook  1ba060668e growable buddy memory runtime 2026-02-13 07:59:52 -06:00
John Alanbrook  77fa058135 mach loading 2026-02-13 07:26:49 -06:00
John Alanbrook  f7e2ff13b5 guard hoisting 2026-02-13 06:32:58 -06:00
John Alanbrook  36fd0a35f9 Merge branch 'fix_gc' into mcode_streamline 2026-02-13 05:59:11 -06:00
John Alanbrook  77c02bf9bf simplify text 2026-02-13 05:59:01 -06:00
John Alanbrook  f251691146 Merge branch 'mach_memory' into mcode_streamline 2026-02-13 05:58:21 -06:00
John Alanbrook  e9ea6ec299 Merge branch 'runtime_rework' into mach_memory 2026-02-13 05:54:28 -06:00
John Alanbrook  bf5fdbc688 backward inference 2026-02-13 05:39:25 -06:00
John Alanbrook  b960d03eeb immediate ascii for string path 2026-02-13 05:35:11 -06:00
John Alanbrook  b4d42fb83d stone pool renamed to constant pool - more appropriate 2026-02-13 05:17:22 -06:00
John Alanbrook  0a680a0cd3 gc print 2026-02-13 05:03:45 -06:00
John Alanbrook  9f0fd84f4f fix growing gc 2026-02-13 04:33:32 -06:00
John Alanbrook  cb9d6e0c0e mmap for poison heap 2026-02-13 04:03:36 -06:00
John Alanbrook  4f18a0b524 tco 2026-02-13 03:57:18 -06:00
John Alanbrook  f296a0c10d fix segv 2026-02-13 03:08:27 -06:00
John Alanbrook  1df6553577 Merge branch 'runtime_rework' into mcode_streamline 2026-02-13 02:52:54 -06:00
John Alanbrook  30a9cfee79 simplify gc model 2026-02-13 02:33:25 -06:00
John Alanbrook  6fff96d9d9 lower intrinsics in mcode 2026-02-13 02:31:16 -06:00
John Alanbrook  4a50d0587d guards in mcode 2026-02-13 02:30:41 -06:00
John Alanbrook  e346348eb5 Merge branch 'fix_gc' into mcode_streamline 2026-02-12 19:15:13 -06:00
John Alanbrook  ff560973f3 Merge branch 'fix_gc' into runtime_rework 2026-02-12 18:57:44 -06:00
John Alanbrook  de4b3079d4 organize 2026-02-12 18:53:06 -06:00
John Alanbrook  29227e655b Merge branch 'pretty_mcode' into mcode_streamline 2026-02-12 18:48:17 -06:00
John Alanbrook  588e88373e Merge branch 'fix_ternary' into pretty_mcode 2026-02-12 18:46:04 -06:00
John Alanbrook  9aca365771 Merge branch 'runtime_rework' into pretty_mcode 2026-02-12 18:44:56 -06:00
John Alanbrook  c56d4d5c3c some cleanup 2026-02-12 18:44:09 -06:00
John Alanbrook  c1e101b24f benchmarks 2026-02-12 18:41:15 -06:00
John Alanbrook  9f0dfbc6a2 fix ternary operator in object literals 2026-02-12 18:33:43 -06:00
John Alanbrook  5c9403a43b compiler optimization output 2026-02-12 18:27:19 -06:00
John Alanbrook  89e34ba71d comprehensive testing for regression analysis 2026-02-12 18:15:03 -06:00
John Alanbrook  73bfa8d7b1 rm some functions 2026-02-12 18:08:56 -06:00
John Alanbrook  4aedb8b0c5 Merge branch 'cli_audit' into ir_artifact 2026-02-12 17:20:45 -06:00
John Alanbrook  ec072f3b63 Merge branch 'runtime_rework' into ir_artifact 2026-02-12 17:18:23 -06:00
John Alanbrook  65755d9c0c fix using old mach 2026-02-12 17:17:12 -06:00
John Alanbrook  19524b3a53 faster json decode 2026-02-12 17:06:48 -06:00
John Alanbrook  f901332c5b clean up cli 2026-02-12 16:45:10 -06:00
John Alanbrook  add136c140 Merge branch 'pretty_mcode' into runtime_rework 2026-02-12 16:36:58 -06:00
John Alanbrook  c1a99dfd4c mcode looks better 2026-02-12 16:36:53 -06:00
John Alanbrook  7b46c6e947 update docs 2026-02-12 16:34:45 -06:00
John Alanbrook  1efb0b1bc9 run with mcode 2026-02-12 16:14:46 -06:00
John Alanbrook  0ba2783b48 Merge branch 'bytecode_cleanup' into mach 2026-02-12 14:08:45 -06:00
John Alanbrook  6de542f0d0 Merge branch 'mach_suite_fix' into bytecode_cleanup 2026-02-12 12:32:06 -06:00
John Alanbrook  6ba4727119 rm call 2026-02-12 11:58:29 -06:00
John Alanbrook  900db912a5 streamline mcode 2026-02-12 09:43:13 -06:00
John Alanbrook  b771b2b5d8 suite passes now with mcode->mach lowering 2026-02-12 09:40:24 -06:00
John Alanbrook  68fb440502 Merge branch 'mach' into bytecode_cleanup 2026-02-12 07:50:09 -06:00
John Alanbrook  e7a2f16004 mcode to mach 2026-02-12 05:23:33 -06:00
John Alanbrook  3a8a17ab60 mcode->mach 2026-02-12 04:28:14 -06:00
John Alanbrook  8a84be65e1 new path 2026-02-11 14:41:37 -06:00
John Alanbrook  c1910ee1db Merge branch 'mcode2' into mach 2026-02-11 13:16:07 -06:00
John Alanbrook  7036cdf2d1 Merge branch 'mach' into bytecode_cleanup 2026-02-11 13:15:20 -06:00
John Alanbrook  fbeec17ce5 simplifications 2026-02-11 13:15:04 -06:00
John Alanbrook  2c55ae8cb2 quiesence exit 2026-02-11 11:50:29 -06:00
John Alanbrook  259bc139fc rm stack usage 2026-02-11 10:17:55 -06:00
John Alanbrook  a252412eca removal of old code 2026-02-11 09:47:30 -06:00
John Alanbrook  b327e16463 rm unused functions 2026-02-11 09:09:40 -06:00
John Alanbrook  da6f096a56 qbe rt 2026-02-10 20:28:51 -06:00
John Alanbrook  1320ef9f47 Merge branch 'mcode2' into mach 2026-02-10 19:04:35 -06:00
John Alanbrook  ed4a5474d5 Merge branch 'mach' into mcode2 2026-02-10 19:04:22 -06:00
John Alanbrook  f52dd80d52 fix compile error 2026-02-10 19:02:42 -06:00
John Alanbrook  504e268b9d run native modules 2026-02-10 18:52:11 -06:00
John Alanbrook  0d47002167 add compile script 2026-02-10 18:35:18 -06:00
John Alanbrook  b65db63447 remove vm_test, update test harness 2026-02-10 17:52:57 -06:00
John Alanbrook  c1ccff5437 fix >256 object literal error 2026-02-10 17:42:58 -06:00
John Alanbrook  2f681fa366 output for parser stages and c runtime doc 2026-02-10 17:38:15 -06:00
John Alanbrook  682b1cf9cf Merge branch 'pitweb' into mcode2 2026-02-10 17:29:03 -06:00
John Alanbrook  ddf3fc1c77 add object literal test 2026-02-10 17:28:59 -06:00
John Alanbrook  f1a5072ff2 fix increment operators on objects 2026-02-10 17:17:36 -06:00
John Alanbrook  f44fb502be string literal error 2026-02-10 17:02:22 -06:00
John Alanbrook  d75ce916d7 compile optimization 2026-02-10 16:37:11 -06:00
John Alanbrook  fe5dc6ecc9 fix fd.c bugs 2026-02-10 14:21:49 -06:00
John Alanbrook  54673e4a04 better disrupt logging; actor exit on crash 2026-02-10 12:38:06 -06:00
John Alanbrook  0d8b5cfb04 bootstrap loads engine 2026-02-10 12:13:18 -06:00
John Alanbrook  3d71f4a363 Merge branch 'mach' into pitweb 2026-02-10 11:15:44 -06:00
John Alanbrook  4deb0e2577 new syntax for internals 2026-02-10 11:03:01 -06:00
John Alanbrook  67b96e1627 add test for multiple declaration 2026-02-10 10:39:23 -06:00
John Alanbrook  4e5f1d8faa fix labeled loops, do-while, shorthand property syntax, and added more tests 2026-02-10 10:32:54 -06:00
John Alanbrook  bd577712d9 fix function shorthand default params 2026-02-10 10:13:46 -06:00
John Alanbrook  6df3b741cf add runtime warnings for stale files 2026-02-10 10:05:27 -06:00
John Alanbrook  178837b88d bootstrap 2026-02-10 09:53:41 -06:00
John Alanbrook  120ce9d30c Merge branch 'mcode2' into pitweb 2026-02-10 09:23:30 -06:00
John Alanbrook  58f185b379 fix merge 2026-02-10 09:21:33 -06:00
John Alanbrook  f7b5252044 core flag 2026-02-10 09:21:21 -06:00
John Alanbrook  ded5f7d74b cell shop env var 2026-02-10 09:13:10 -06:00
John Alanbrook  fe6033d6cb deploy website script 2026-02-10 08:12:51 -06:00
John Alanbrook  60e61eef76 scheduler starts 2026-02-10 08:12:42 -06:00
John Alanbrook  ad863fb89b postfix/prefix operators handled correctly 2026-02-10 08:12:27 -06:00
John Alanbrook  96f8157039 Merge branch 'mach' into mcode2 2026-02-10 07:38:35 -06:00
John Alanbrook  c4ff0bc109 intrinsics rewritten without ++, --, etc 2026-02-10 07:19:45 -06:00
John Alanbrook  877250b1d8 decomposed mcode 2026-02-10 07:12:27 -06:00
John Alanbrook  747227de40 better parse errors 2026-02-10 06:51:26 -06:00
John Alanbrook  3f7e34cd7a more useful parse errors 2026-02-10 06:08:15 -06:00
John Alanbrook  cef5c50169 add is_letter intrinsic 2026-02-10 06:00:47 -06:00
John Alanbrook  0428424ec7 Merge branch 'mach' into mcode2 2026-02-10 05:53:51 -06:00
John Alanbrook  78e64c5067 optimize parse 2026-02-10 05:53:49 -06:00
John Alanbrook  ff11c49c39 optimize tokenize 2026-02-10 05:52:19 -06:00
John Alanbrook  b8b110b616 bootstrap with serialized mach 2026-02-09 22:54:42 -06:00
John Alanbrook  930dcfba36 Merge branch 'mach' into mqbe 2026-02-09 22:22:15 -06:00
John Alanbrook  eeccb3b34a bootstrap 2026-02-09 22:21:55 -06:00
John Alanbrook  407797881c bytecode serialization 2026-02-09 22:19:41 -06:00
John Alanbrook  7069475729 Merge branch 'pitweb' into mcode2 2026-02-09 20:33:56 -06:00
John Alanbrook  3e42c57479 rm tokenizer/parser/mcode generators from C 2026-02-09 20:05:50 -06:00
John Alanbrook  4b76728230 ast folding 2026-02-09 20:04:40 -06:00
John Alanbrook  4ff9332d38 lsp 2026-02-09 18:53:13 -06:00
John Alanbrook  27e852af5b Merge branch 'mach' into mqbe 2026-02-09 18:46:10 -06:00
John Alanbrook  66a44595c8 fix errors with mcode 2026-02-09 18:45:55 -06:00
John Alanbrook  fc0a1547dc Merge branch 'mach' into mqbe 2026-02-09 18:36:47 -06:00
John Alanbrook  c0b4e70eb2 fix two gc bugs 2026-02-09 18:32:41 -06:00
John Alanbrook  f4714b2b36 qbe macros 2026-02-09 18:17:31 -06:00
John Alanbrook  7f691fd52b fix mach vm suite errors 2026-02-09 18:12:44 -06:00
John Alanbrook  d5209e1d59 fix issues with parse.cm and tokenize.cm 2026-02-09 17:43:44 -06:00
John Alanbrook  68e2395b92 mcode generators 2026-02-09 17:01:39 -06:00
John Alanbrook  1b747720b7 fix regex parser error 2026-02-09 14:34:33 -06:00
John Alanbrook  849123d8fc streamlined cell running 2026-02-09 13:12:05 -06:00
John Alanbrook  6ad919624b Merge branch 'mcode2' into mach 2026-02-09 12:58:05 -06:00
John Alanbrook  a11f3e7d47 Merge branch 'pitweb' into mach 2026-02-09 12:57:01 -06:00
John Alanbrook  3d1fd37979 rm quickjs vm 2026-02-09 12:54:55 -06:00
John Alanbrook  8fc9bfe013 parse and tokenize modules 2026-02-09 12:19:05 -06:00
John Alanbrook  368511f666 parse.ce and tokenize.ce 2026-02-09 11:56:09 -06:00
John Alanbrook  3934cdb683 fix disrupts 2026-02-09 11:28:10 -06:00
John Alanbrook  45556c344d Merge branch 'pitweb' into mach 2026-02-09 11:17:45 -06:00
John Alanbrook  bc87fe5f70 string indexing 2026-02-09 11:17:42 -06:00
John Alanbrook  790293d915 Merge branch 'mach' into pitweb 2026-02-09 11:15:44 -06:00
John Alanbrook  872cd6ab51 more correct syntax and AI instructions 2026-02-09 11:00:23 -06:00
John Alanbrook  e04ab4c30c bootstrap 2026-02-09 10:56:15 -06:00
John Alanbrook  0503acb7e6 rm block scope 2026-02-09 10:11:22 -06:00
John Alanbrook  d0c68d7a7d Merge branch 'mcode2' into pitweb 2026-02-09 10:00:28 -06:00
John Alanbrook  7469383e66 refactor into multiple files 2026-02-08 16:32:14 -06:00
John Alanbrook  1fee8f9f8b condense jsruntime and jscontext 2026-02-08 10:10:42 -06:00
John Alanbrook  a4f3b025c5 update 2026-02-08 08:25:48 -06:00
John Alanbrook  d18ea1b330 update engine.cm 2026-02-08 08:24:49 -06:00
John Alanbrook  4de0659474 allow tokens as properties 2026-02-08 00:34:15 -06:00
John Alanbrook  27a9b72b07 functino tests; default args for mach and mcode 2026-02-08 00:31:18 -06:00
John Alanbrook  a3622bd5bd better parser error reporting 2026-02-08 00:23:47 -06:00
John Alanbrook  2f6700415e add functinos 2026-02-07 23:38:39 -06:00
John Alanbrook  243d92f7f3 rm ?? and .? 2026-02-07 22:09:40 -06:00
John Alanbrook  8f9d026b9b use casesensitive json 2026-02-07 17:01:11 -06:00
John Alanbrook  2c9ac8f7b6 no json roundtrip for mcode 2026-02-07 16:29:04 -06:00
John Alanbrook  80f24e131f all suite asan errors fixed for mcode 2026-02-07 16:15:58 -06:00
John Alanbrook  a8f8af7662 Merge branch 'mach' into mcode2 2026-02-07 15:49:38 -06:00
John Alanbrook  f5b3494762 memfree for mcode 2026-02-07 15:49:36 -06:00
John Alanbrook  13a6f6c79d faster mach bytecode generation 2026-02-07 15:49:09 -06:00
John Alanbrook  1a925371d3 faster parsing 2026-02-07 15:38:36 -06:00
John Alanbrook  08d2bacb1f improve ast parsing time 2026-02-07 15:22:18 -06:00
John Alanbrook  7322153e57 Merge branch 'mach' into mcode2 2026-02-07 14:53:41 -06:00
John Alanbrook  cc72c4cb0f fix mem errors for mcode 2026-02-07 14:53:35 -06:00
John Alanbrook  ae1f09a28f fix all memory errors 2026-02-07 14:53:14 -06:00
John Alanbrook  3c842912a1 gc fixing in mach vm 2026-02-07 14:25:04 -06:00
John Alanbrook  7cacf32078 Merge branch 'mach' into mcode2 2026-02-07 14:24:52 -06:00
John Alanbrook  b740612761 gc fixing in mach vm 2026-02-07 14:24:49 -06:00
John Alanbrook  6001c2b4bb Merge branch 'mach' into mcode2 2026-02-07 14:19:19 -06:00
John Alanbrook  98625fa15b mcode fix tests 2026-02-07 14:19:17 -06:00
John Alanbrook  87fafa44c8 fix last error 2026-02-07 13:43:13 -06:00
John Alanbrook  45ce76aef7 fixes 2026-02-07 12:50:46 -06:00
John Alanbrook  32fb44857c 1 test failing now 2026-02-07 12:50:26 -06:00
John Alanbrook  31d67f6710 fix vm suite tests 2026-02-07 12:34:18 -06:00
John Alanbrook  bae4e957e9 hugo website for pit 2026-02-07 12:01:58 -06:00
John Alanbrook  3621b1ef33 Merge branch 'mach' into mcode2 2026-02-07 11:53:44 -06:00
John Alanbrook  836227c8d3 fix mach proxy and templates 2026-02-07 11:53:39 -06:00
John Alanbrook  0ae59705d4 fix errors 2026-02-07 11:53:26 -06:00
John Alanbrook  8e2607b6ca Merge branch 'mcode2' into mach 2026-02-07 10:54:19 -06:00
John Alanbrook  dc73e86d8c handle mcode in callinternal 2026-02-07 10:51:45 -06:00
John Alanbrook  555cceb9d6 fixed text runner 2026-02-07 10:51:27 -06:00
John Alanbrook  fbb7933eb6 Merge branch 'mcode2' into mach 2026-02-07 10:40:20 -06:00
John Alanbrook  0287d6ada4 regex uses C strings now 2026-02-07 10:28:35 -06:00
John Alanbrook  73cd6a255d more test fixing 2026-02-07 07:59:52 -06:00
John Alanbrook  83ea67c01b Merge branch 'mach' into mcode2 2026-02-07 00:10:01 -06:00
John Alanbrook  16059cca4e fix tests 2026-02-07 00:09:58 -06:00
John Alanbrook  9ffe60ebef vm suite 2026-02-07 00:09:41 -06:00
John Alanbrook  2beafec5d9 fix tests 2026-02-07 00:09:21 -06:00
John Alanbrook  aba8eb66bd crash fixes 2026-02-06 23:38:56 -06:00
John Alanbrook  1abcaa92c7 Merge branch 'mach' into mcode2 2026-02-06 23:20:55 -06:00
John Alanbrook  168f7c71d5 fix text header chasing 2026-02-06 23:20:48 -06:00
John Alanbrook  56ed895b6e Merge branch 'mach' into mcode2 2026-02-06 23:15:38 -06:00
John Alanbrook  1e4646999d fix mach crashes 2026-02-06 23:15:33 -06:00
John Alanbrook  68d6c907fe fix mcode compilation 2026-02-06 23:13:13 -06:00
John Alanbrook  8150c64c7d pitcode 2026-02-06 22:58:21 -06:00
John Alanbrook  024d796ca4 add asan error vm stacktrace 2026-02-06 21:49:53 -06:00
John Alanbrook  ea185dbffd rm typeof 2026-02-06 21:26:45 -06:00
John Alanbrook  6571262af0 mach disrupt support 2026-02-06 21:09:18 -06:00
John Alanbrook  77ae133747 Merge branch 'mcode2' into mach 2026-02-06 20:45:57 -06:00
John Alanbrook  142a2d518b Merge branch 'stacktrace' into mach 2026-02-06 20:44:43 -06:00
John Alanbrook  5b65c64fe5 stack traces 2026-02-06 20:44:38 -06:00
John Alanbrook  e985fa5fe1 disrupt/disruption; remove try/catch 2026-02-06 18:40:56 -06:00
John Alanbrook  160ade2410 smarter gc malloc for large allocations 2026-02-06 18:38:23 -06:00
John Alanbrook  e2bc5948c1 fix functions and closures in mach 2026-02-06 18:30:26 -06:00
John Alanbrook  8cf98d8a9e Merge branch 'mcode2' into mach 2026-02-06 15:14:40 -06:00
John Alanbrook  3c38e828e5 context free tokenizing, parsing, compiling 2026-02-06 15:14:18 -06:00
John Alanbrook  af2d296f40 use new parser info 2026-02-06 12:45:25 -06:00
John Alanbrook  0a45394689 fix crash related to allocating in context heap 2026-02-06 12:43:19 -06:00
John Alanbrook  32885a422f bring in mcode 2026-02-06 04:24:14 -06:00
John Alanbrook  8959e53303 Merge branch 'newsyn' into mcode2 2026-02-06 03:55:56 -06:00
John Alanbrook  8a9a02b131 Merge branch 'newsyn' into mach 2026-02-06 03:54:38 -06:00
John Alanbrook  f9d68b2990 fix if/else, chained assignment 2026-02-06 03:54:25 -06:00
John Alanbrook  017a57b1eb use new parser information 2026-02-06 03:44:44 -06:00
John Alanbrook  ff8c68d01c mcode and mcode interpreter 2026-02-06 03:31:31 -06:00
John Alanbrook  9212003401 cannot set unbound 2026-02-06 03:24:01 -06:00
John Alanbrook
f9f8a4db42 Merge branch 'newsyn' into mach 2026-02-06 03:10:14 -06:00
John Alanbrook
8db95c654b more info in AST parser 2026-02-06 03:00:46 -06:00
John Alanbrook
63feabed5d mach vm 2026-02-06 02:50:48 -06:00
John Alanbrook
c814c0e1d8 rm new; rm void 2026-02-06 02:12:19 -06:00
John Alanbrook
bead0c48d4 Merge branch 'mcode' into newsyn 2026-02-06 02:02:46 -06:00
John Alanbrook
98dcab4ba7 comprehensive syntax test; fix multiple default args 2026-02-06 02:02:17 -06:00
John Alanbrook
ae44ce7b4b mcode and mach 2026-02-06 01:56:26 -06:00
John Alanbrook
1c38699b5a fix scope resolution 2026-02-06 01:41:03 -06:00
John Alanbrook
9a70a12d82 object literal 2026-02-05 21:41:34 -06:00
John Alanbrook
a8a271e014 Merge branch 'syntax' into ast 2026-02-05 20:39:56 -06:00
John Alanbrook
91761c03e6 push/pop syntax 2026-02-05 20:39:53 -06:00
John Alanbrook
5a479cc765 function literal in record literal 2026-02-05 20:32:57 -06:00
John Alanbrook
97a003e025 errors 2026-02-05 20:12:06 -06:00
John Alanbrook
20f14abd17 string templates 2026-02-05 19:34:06 -06:00
John Alanbrook
19ba184fec default params for functions 2026-02-05 18:44:40 -06:00
John Alanbrook
7909b11f6b better errors 2026-02-05 18:35:48 -06:00
John Alanbrook
27229c675c add parser and tokenizer errors 2026-02-05 18:14:49 -06:00
John Alanbrook
64d234ee35 Merge branch 'syntax' into ast 2026-02-05 17:45:15 -06:00
John Alanbrook
e861d73eec mkarecord 2026-02-05 17:45:13 -06:00
John Alanbrook
a24331aae5 tokenize 2026-02-05 11:21:34 -06:00
John Alanbrook
c1cb922b64 more comprehensive ast 2026-02-05 10:59:56 -06:00
John Alanbrook
aacb0b48bf more vm tests 2026-02-05 10:44:53 -06:00
John Alanbrook
b38aec95b6 Merge branch 'syntax' into ast 2026-02-05 10:29:29 -06:00
John Alanbrook
b29d3c2fe0 add vm tests 2026-02-05 10:29:09 -06:00
John Alanbrook
1cc3005b68 better jump labels 2026-02-05 10:28:13 -06:00
John Alanbrook
b86cd042fc vm unit tests 2026-02-05 10:21:16 -06:00
John Alanbrook
8b7af0c22a vm bytecode output 2026-02-05 10:14:14 -06:00
John Alanbrook
f71f6a296b register vm 2026-02-05 06:55:45 -06:00
John Alanbrook
9bd764b11b add go 2026-02-05 03:10:06 -06:00
John Alanbrook
058cdfd2e4 groundwork for vm 2026-02-05 02:59:16 -06:00
John Alanbrook
1ef837c6ff rm bound function stuff 2026-02-05 02:36:14 -06:00
John Alanbrook
cd21de3d70 rm realm concept on function 2026-02-05 02:33:50 -06:00
John Alanbrook
a98faa4dbb debugging 2026-02-05 02:27:26 -06:00
John Alanbrook
08559234c4 fix closures 2026-02-05 02:07:18 -06:00
John Alanbrook
c3dc27eac6 machine code 2026-02-04 23:45:51 -06:00
John Alanbrook
7170a9c7eb ast 2026-02-04 22:20:57 -06:00
John Alanbrook
a08ee50f84 serializable bytecode 2026-02-04 20:57:44 -06:00
John Alanbrook
ed7dd91c3f rm global 2026-02-04 18:57:45 -06:00
John Alanbrook
3abe20fee0 merge 2026-02-04 18:38:46 -06:00
John Alanbrook
a92a96118e remove eval parser; consolidate addintrinsic 2026-02-04 17:15:03 -06:00
John Alanbrook
4e407fe301 migrate nota, wota into quickjs.c 2026-02-04 17:03:48 -06:00
John Alanbrook
ab74cdc173 merge warningfix 2026-02-04 16:17:52 -06:00
John Alanbrook
2c9d039271 massive cleanup 2026-02-04 14:26:17 -06:00
John Alanbrook
80d314c58f Merge templatefix branch
Use PPretext for parser string building to avoid GC issues during parsing.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-04 14:21:25 -06:00
John Alanbrook
611fba2b6f fix regexp parsing 2026-02-04 14:19:39 -06:00
John Alanbrook
f5fad52d47 Rewrite template literals with OP_format_template
Replace complex template literal handling with a simple format-based
approach. Template literals like `hello ${x}` now compile to:
  <push x>
  OP_format_template expr_count=1, cpool_idx=N
where cpool[N] = "hello {0}"

The opcode handler parses the format string, substitutes {N} placeholders
with stringified stack values, and produces the result string.

Key implementation details:
- Uses PPretext (parser pretext) with pjs_malloc to avoid GC issues
- Re-reads b->cpool[cpool_idx] after any GC-triggering operation
- Opcode layout is u16 expr_count followed by u32 cpool_idx - the u16
  must come first because compute_stack_size reads the pop count from
  position 1 for npop_u16 format opcodes

Removed:
- OP_template_concat opcode and handler
- Tagged template literal support (users can use format() directly)
- FuncCallType enum (FUNC_CALL_TEMPLATE case no longer needed)
- Complex template object creation logic in js_parse_template

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-04 14:18:02 -06:00
John Alanbrook
2fc7d333ad new environment tact for engine 2026-02-04 14:12:57 -06:00
John Alanbrook
d4635f2a75 remove unused vars, fix warnings 2026-02-04 13:49:43 -06:00
505 changed files with 209435 additions and 36247 deletions

1
.gitattributes vendored Normal file

@@ -0,0 +1 @@
*.mach binary merge=ours

4
.gitignore vendored

@@ -1,6 +1,7 @@
.git/
.obj/
website/
website/public/
website/.hugo_build.lock
bin/
build/
*.zip
@@ -15,6 +16,7 @@ build/
source/shaders/*.h
.DS_Store
*.html
!website/themes/**/*.html
.vscode
*.icns
icon.ico

233
CLAUDE.md

@@ -1,25 +1,222 @@
# Code style
All code is done with 2 spaces for indentation.
# ƿit (pit) Language Project
For cell script and its integration files, objects are preferred over classes, with limited use of prototypes; make objects sendable between actors (.ce files).
## Building
## cell script format
Cell script files end in .ce or .cm. Cell script is similar to Javascript but with some differences.
Build (or rebuild after changes): `make`
Install to system: `make install`
Run `cell --help` to see all CLI flags.
Variables are declared with 'var'. Var behaves like let.
Constants are declared with 'def'.
!= and == are strict, there is no !== or ===.
There is no undefined, only null.
There are no classes, only objects and prototypes.
Prefer backticks for string interpolation. Otherwise, converting non-strings with the text() function is required.
Everything should be lowercase.
## Code Style
There are no arraybuffers, only blobs, which work with bits. They must be stoned (e.g. stone(blob)) before being read from.
All code uses 2 spaces for indentation. K&R style for C and Javascript.
## c format
For cell script integration files, everything should be declared static that can be. Most don't have headers at all. Files in a package are not shared between packages.
## ƿit Script Quick Reference
There is no undefined, so only JS_IsNull and JS_NULL should be used.
ƿit script files: `.ce` (actors) and `.cm` (modules). The syntax is similar to JavaScript with important differences listed below.
## how module loading is done in cell script
Within a package, a c file, if using the correct macros (CELL_USE_FUNCS etc), will be loaded as a module under its name; so png.c inside a package is loaded as <package>/png, giving you access to its functions.
### Key Differences from JavaScript
- `var` (mutable) and `def` (constant) — no `let` or `const`
- `==` and `!=` are strict (no `===` or `!==`)
- No `undefined` — only `null`
- No classes — only objects and prototypes (`meme()`, `proto()`, `isa()`)
- No `switch`/`case` — use record dispatch (a record keyed by case, values are functions or results) instead of if/else chains
- No `for...in`, `for...of`, spread (`...`), rest params, or default params
- Functions have a maximum of 4 parameters — use a record for more
- Variables must be declared at function body level only (not in if/while/for/blocks)
- All variables must be initialized at declaration (`var x` alone is an error; use `var x = null`)
- No `try`/`catch`/`throw` — use `disrupt`/`disruption`
- No arraybuffers — only `blob` (works with bits; must `stone(blob)` before reading)
- Identifiers can contain `?` and `!` (e.g., `nil?`, `set!`, `is?valid`)
- Prefer backticks for string interpolation; otherwise use `text()` to convert non-strings
- Everything should be lowercase
### Intrinsic Functions (always available, no `use()` needed)
The creator functions are **polymorphic** — behavior depends on argument types:
- `array(number)` — create array of size N filled with null
- `array(number, value_or_fn)` — create array with initial values
- `array(array)` — copy array
- `array(array, fn)` — map
- `array(array, array)` — concatenate
- `array(array, from, to)` — slice
- `array(record)` — get keys as array of text
- **`array(text)` — split text into individual characters** (e.g., `array("hello")` → `["h","e","l","l","o"]`)
- `array(text, separator)` — split by separator
- `array(text, length)` — split into chunks of length
- `text(array, separator)` — join array into text
- `text(number)` or `text(number, radix)` — number to text
- `text(text, from, to)` — substring
- `number(text)` or `number(text, radix)` — parse text to number
- `number(logical)` — boolean to number
- `record(record)` — copy
- `record(record, another)` — merge
- `record(array_of_keys)` — create record from keys
Other key intrinsics: `length()`, `stone()`, `is_stone()`, `print()`, `filter()`, `find()`, `reduce()`, `sort()`, `reverse()`, `some()`, `every()`, `starts_with()`, `ends_with()`, `meme()`, `proto()`, `isa()`, `splat()`, `apply()`, `extract()`, `replace()`, `search()`, `format()`, `lower()`, `upper()`, `trim()`
Sensory functions: `is_array()`, `is_text()`, `is_number()`, `is_object()`, `is_function()`, `is_null()`, `is_logical()`, `is_integer()`, `is_stone()`, etc.
### Standard Library (loaded with `use()`)
- `blob` — binary data (bits, not bytes)
- `time` — time constants and conversions
- `math` — trig, logarithms, roots (`math/radians`, `math/turns`)
- `json` — JSON encoding/decoding
- `random` — random number generation
### Actor Model
- `.ce` files are actors (independent execution units, don't return values)
- `.cm` files are modules (return a value, cached and frozen)
- Actors never share memory; communicate via `$send()` message passing
- Actor intrinsics start with `$`: `$me`, `$stop()`, `$send()`, `$start()`, `$delay()`, `$receiver()`, `$clock()`, `$portal()`, `$contact()`, `$couple()`, `$unneeded()`, `$connection()`, `$time_limit()`
### Requestors (async composition)
`sequence()`, `parallel()`, `race()`, `fallback()` — compose asynchronous operations. See docs/requestors.md.
### Error Handling
```javascript
var fn = function() {
  disrupt // bare keyword, no value
} disruption {
  // handle error; can re-raise with disrupt
}
```
### Push/Pop Syntax
```javascript
var a = [1, 2]
a[] = 3 // push: [1, 2, 3]
var v = a[] // pop: v is 3, a is [1, 2]
```
## C Integration
- Declare everything `static` that can be
- Most files don't have headers; files in a package are not shared between packages
- No undefined in C API: use `JS_IsNull` and `JS_NULL` only
- A C file with correct macros (`CELL_USE_FUNCS` etc) is loaded as a module by its name (e.g., `png.c` in a package → `use('<package>/png')`)
- C symbol naming: `js_<pkg>_<file>_use` (e.g., `js_core_math_radians_use` for `core/math/radians`)
- Core is the `core` package — its symbols follow the same `js_core_<name>_use` pattern as all other packages
- Package directories should contain only source files (no `.mach`/`.mcode` alongside source)
- Build cache files in `build/` are bare hashes (no extensions)
### MANDATORY: GC Rooting for C Functions
This project uses a **copying garbage collector**. ANY JS allocation (`JS_NewObject`, `JS_NewString`, `JS_NewArray`, `JS_NewInt32`, `JS_SetPropertyStr`, `js_new_blob_stoned_copy`, etc.) can trigger GC, which **invalidates all unrooted JSValue locals**. This is not theoretical — it causes real crashes.
**Before writing or modifying ANY C function**, apply this checklist:
1. Count the number of `JS_New*`, `JS_SetProperty*`, and `js_new_blob*` calls in the function
2. If there are 2 or more, the function MUST use `JS_FRAME`/`JS_ROOT`/`JS_RETURN`
3. Every JSValue that is held across an allocating call must be rooted
**Pattern — object with properties:**
```c
JS_FRAME(js);
JS_ROOT(obj, JS_NewObject(js));
JS_SetPropertyStr(js, obj.val, "x", JS_NewInt32(js, 42));
JS_SetPropertyStr(js, obj.val, "name", JS_NewString(js, "hello"));
JS_RETURN(obj.val);
```
**Pattern — array with loop:**
```c
JS_FRAME(js);
JS_ROOT(arr, JS_NewArray(js));
for (int i = 0; i < count; i++) {
  JS_ROOT(item, JS_NewObject(js));
  JS_SetPropertyStr(js, item.val, "v", JS_NewInt32(js, i));
  JS_SetPropertyNumber(js, arr.val, i, item.val);
}
JS_RETURN(arr.val);
```
**Rules:**
- Access rooted values via `.val` (e.g., `obj.val`, not `obj`)
- Error returns before `JS_FRAME` use plain `return`
- Error returns after `JS_FRAME` must use `JS_RETURN_EX()` or `JS_RETURN_NULL()`
- When calling a helper that itself returns a JSValue, that return value is safe to pass directly into `JS_SetPropertyStr` — no need to root temporaries that aren't stored in a local
**Common mistake — UNSAFE (will crash under GC pressure):**
```c
JSValue obj = JS_NewObject(js); // NOT rooted
JS_SetPropertyStr(js, obj, "pixels", js_new_blob_stoned_copy(js, data, len));
// ^^^ blob allocation can GC, invalidating obj
return obj; // obj may be a dangling pointer
```
See `docs/c-modules.md` for the full GC safety reference.
## Project Layout
- `source/` — C source for the cell runtime and CLI
- `docs/` — master documentation (Markdown), reflected on the website
- `website/` — Hugo site; theme at `website/themes/knr/`
- `internal/` — internal ƿit scripts (engine.cm etc.)
- `packages/` — core packages
- `Makefile` — build system (`make` to rebuild, `make bootstrap` for first build)
## Package Management (Shop CLI)
When running locally with `./cell --dev`, these commands manage packages:
```
./cell --dev add <path> # add a package (local path or remote)
./cell --dev remove <path> # remove a package (cleans lock, symlink, dylibs)
./cell --dev build <path> # build C modules for a package
./cell --dev test package <path> # run tests for a package
./cell --dev list # list installed packages
```
Local paths are symlinked into `.cell/packages/`. The build step compiles C files to content-addressed dylibs in `~/.cell/build/<hash>` and writes a per-package manifest so the runtime can find them. C files in `src/` are support files linked into module dylibs, not standalone modules.
## Debugging Compiler Issues
When investigating bugs in compiled output (wrong values, missing operations, incorrect comparisons), **start from the optimizer down, not the VM up**. The compiler inspection tools will usually identify the problem faster than adding C-level tracing:
```
./cell --dev streamline --types <file> # show inferred slot types — look for wrong types
./cell --dev ir_report --events <file> # show every optimization applied and why
./cell --dev ir_report --types <file> # show type inference results per function
./cell --dev mcode --pretty <file> # show raw IR before optimization
./cell --dev streamline --ir <file> # show human-readable optimized IR
```
**Triage order:**
1. `streamline --types` — are slot types correct? Wrong type inference causes wrong optimizations.
2. `ir_report --events` — are type checks being incorrectly eliminated? Look for `known_type_eliminates_guard` on slots that shouldn't have known types.
3. `mcode --pretty` — is the raw IR correct before optimization? If so, the bug is in streamline.
4. Only dig into `source/mach.c` if the IR looks correct at all levels.
See `docs/compiler-tools.md` for the full tool reference and `docs/spec/streamline.md` for pass details.
## Testing
After any C runtime changes, run all three test suites before considering the work done:
```
make # rebuild
./cell --dev vm_suite # VM-level tests (641 tests)
./cell --dev test suite # language-level tests (493 tests)
./cell --dev fuzz # fuzzer (100 iterations)
```
All three must pass with 0 failures.
## Documentation
The `docs/` folder is the single source of truth. The website at `website/` mounts it via Hugo. Key files:
- `docs/language.md` — language syntax reference
- `docs/functions.md` — all built-in intrinsic functions
- `docs/actors.md` — actor model and actor intrinsics
- `docs/requestors.md` — async requestor pattern
- `docs/library/*.md` — intrinsic type reference (text, number, array, object) and standard library modules

107
Makefile

@@ -1,82 +1,47 @@
# Development build: creates libcell_runtime.dylib + thin main wrapper
# This is the default target for working on cell itself
#
# If cell doesn't exist yet, use 'make bootstrap' first (requires meson)
# or manually build with meson once.
#
# The cell shop is at ~/.cell and core scripts are installed to ~/.cell/core
BUILD = build
BUILD_DBG = build_debug
INSTALL_BIN = /opt/homebrew/bin
INSTALL_LIB = /opt/homebrew/lib
INSTALL_INC = /opt/homebrew/include
CELL_SHOP = $(HOME)/.cell
CELL_SHOP = $(HOME)/.cell
CELL_CORE_PACKAGE = $(CELL_SHOP)/packages/core
all: $(BUILD)/build.ninja
meson compile -C $(BUILD)
cp $(BUILD)/libcell_runtime.dylib .
cp $(BUILD)/cell .
maker: install
$(BUILD)/build.ninja:
meson setup $(BUILD) -Dbuildtype=release
makecell:
cell pack core -o cell
cp cell /opt/homebrew/bin/
debug: $(BUILD_DBG)/build.ninja
meson compile -C $(BUILD_DBG)
cp $(BUILD_DBG)/libcell_runtime.dylib .
cp $(BUILD_DBG)/cell .
# Install core: symlink this directory to ~/.cell/core
install: bootstrap $(CELL_SHOP)
@echo "Linking cell core to $(CELL_CORE_PACKAGE)"
rm -rf $(CELL_CORE_PACKAGE)
ln -s $(PWD) $(CELL_CORE_PACKAGE)
cp cell /opt/homebrew/bin/
cp libcell_runtime.dylib /opt/homebrew/lib/
@echo "Core installed."
$(BUILD_DBG)/build.ninja:
meson setup $(BUILD_DBG) -Dbuildtype=debug -Db_sanitize=address
cell: libcell_runtime.dylib cell_main
cp cell_main cell
chmod +x cell
cp cell /opt/homebrew/bin/cell
cp libcell_runtime.dylib /opt/homebrew/lib/
install: all $(CELL_SHOP)
cp cell $(INSTALL_BIN)/cell
cp libcell_runtime.dylib $(INSTALL_LIB)/
cp source/cell.h source/quickjs.h source/wota.h $(INSTALL_INC)/
rm -rf $(CELL_SHOP)/packages/core
ln -s $(CURDIR) $(CELL_SHOP)/packages/core
@echo "Installed cell to $(INSTALL_BIN) and $(INSTALL_LIB)"
# Build the shared runtime library (everything except main.c)
# Uses existing cell to run build -d
libcell_runtime.dylib: $(CELL_SHOP)/build/dynamic
cell build -d
cp $(CELL_SHOP)/build/dynamic/libcell_runtime.dylib .
install_debug: debug $(CELL_SHOP)
cp cell $(INSTALL_BIN)/cell
cp libcell_runtime.dylib $(INSTALL_LIB)/
cp source/cell.h source/quickjs.h source/wota.h $(INSTALL_INC)/
rm -rf $(CELL_SHOP)/packages/core
ln -s $(CURDIR) $(CELL_SHOP)/packages/core
@echo "Installed cell (debug+asan) to $(INSTALL_BIN) and $(INSTALL_LIB)"
# Build the thin main wrapper that links to libcell_runtime
cell_main: source/main.c libcell_runtime.dylib
cc -o cell_main source/main.c -L. -lcell_runtime -Wl,-rpath,@loader_path -Wl,-rpath,/opt/homebrew/lib
# Create the cell shop directories
$(CELL_SHOP):
mkdir -p $(CELL_SHOP)
mkdir -p $(CELL_SHOP)/packages
mkdir -p $(CELL_SHOP)/cache
mkdir -p $(CELL_SHOP)/build
mkdir -p $(CELL_SHOP)/packages $(CELL_SHOP)/cache $(CELL_SHOP)/build
$(CELL_CORE):
ln -s $(PWD) $(CELL_CORE)
# Static build: creates a fully static cell binary (for distribution)
static:
cell build
cp $(CELL_SHOP)/build/static/cell .
# Bootstrap: build cell from scratch using meson (only needed once)
# Also installs core scripts to ~/.cell/core
bootstrap:
meson setup build_bootstrap -Dbuildtype=debug -Db_sanitize=address
meson compile -C build_bootstrap
cp build_bootstrap/cell .
cp build_bootstrap/libcell_runtime.dylib .
@echo "Bootstrap complete. Cell shop initialized at $(CELL_SHOP)"
@echo "Now run 'make' to rebuild with cell itself."
# Clean build artifacts
clean:
rm -rf $(CELL_SHOP)/build build_bootstrap
rm -f cell cell_main libcell_runtime.dylib
rm -rf $(BUILD) $(BUILD_DBG)
rm -f cell libcell_runtime.dylib
# Ensure dynamic build directory exists
$(CELL_SHOP)/build/dynamic: $(CELL_SHOP)
mkdir -p $(CELL_SHOP)/build/dynamic
# Legacy meson target
meson:
meson setup build_dbg -Dbuildtype=debugoptimized
meson install -C build_dbg
.PHONY: cell static bootstrap clean meson install
.PHONY: all install debug install_debug clean

192
add.ce

@@ -3,101 +3,133 @@
// Usage:
// cell add <locator> Add a dependency using default alias
// cell add <locator> <alias> Add a dependency with custom alias
// cell add -r <directory> Recursively find and add all packages in directory
//
// This adds the dependency to cell.toml and installs it to the shop.
var shop = use('internal/shop')
var pkg = use('package')
var build = use('build')
var fd = use('fd')
var locator = null
var alias = null
var recursive = false
var cwd = fd.realpath('.')
var parts = null
var locators = null
var added = 0
var failed = 0
var _add_dep = null
var _install = null
var i = 0
array(args, function(arg) {
if (arg == '--help' || arg == '-h') {
log.console("Usage: cell add <locator> [alias]")
log.console("")
log.console("Add a dependency to the current package.")
log.console("")
log.console("Examples:")
log.console(" cell add gitea.pockle.world/john/prosperon")
log.console(" cell add gitea.pockle.world/john/cell-image image")
log.console(" cell add ../local-package")
$stop()
} else if (!starts_with(arg, '-')) {
if (!locator) {
locator = arg
} else if (!alias) {
alias = arg
var run = function() {
for (i = 0; i < length(args); i++) {
if (args[i] == '--help' || args[i] == '-h') {
log.console("Usage: cell add <locator> [alias]")
log.console("")
log.console("Add a dependency to the current package.")
log.console("")
log.console("Examples:")
log.console(" cell add gitea.pockle.world/john/prosperon")
log.console(" cell add gitea.pockle.world/john/cell-image image")
log.console(" cell add ../local-package")
log.console(" cell add -r ../packages")
return
} else if (args[i] == '-r') {
recursive = true
} else if (!starts_with(args[i], '-')) {
if (!locator) {
locator = args[i]
} else if (!alias) {
alias = args[i]
}
}
}
})
if (!locator) {
log.console("Usage: cell add <locator> [alias]")
$stop()
}
// Resolve relative paths to absolute paths
if (locator == '.' || starts_with(locator, './') || starts_with(locator, '../') || fd.is_dir(locator)) {
var resolved = fd.realpath(locator)
if (resolved) {
locator = resolved
}
}
// Generate default alias from locator
if (!alias) {
// Use the last component of the locator as alias
var parts = array(locator, '/')
alias = parts[length(parts) - 1]
// Remove any version suffix
if (search(alias, '@') != null) {
alias = array(alias, '@')[0]
}
}
// Check we're in a package directory
var cwd = fd.realpath('.')
if (!fd.is_file(cwd + '/cell.toml')) {
log.error("Not in a package directory (no cell.toml found)")
$stop()
}
log.console("Adding " + locator + " as '" + alias + "'...")
// Add to local project's cell.toml
try {
pkg.add_dependency(null, locator, alias)
log.console(" Added to cell.toml")
} catch (e) {
log.error("Failed to update cell.toml: " + e)
$stop()
}
// Install to shop
try {
shop.get(locator)
shop.extract(locator)
// Build scripts
shop.build_package_scripts(locator)
// Build C code if any
try {
var target = build.detect_host_target()
build.build_dynamic(locator, target, 'release')
} catch (e) {
// Not all packages have C code
if (!locator && !recursive) {
log.console("Usage: cell add <locator> [alias]")
return
}
log.console(" Installed to shop")
} catch (e) {
log.error("Failed to install: " + e)
$stop()
}
if (locator)
locator = shop.resolve_locator(locator)
log.console("Added " + alias + " (" + locator + ")")
// Generate default alias from locator
if (!alias && locator) {
parts = array(locator, '/')
alias = parts[length(parts) - 1]
if (search(alias, '@') != null)
alias = array(alias, '@')[0]
}
// Check we're in a package directory
if (!fd.is_file(cwd + '/cell.toml')) {
log.error("Not in a package directory (no cell.toml found)")
return
}
// Recursive mode
if (recursive) {
if (!locator) locator = '.'
locator = shop.resolve_locator(locator)
if (!fd.is_dir(locator)) {
log.error(`${locator} is not a directory`)
return
}
locators = filter(pkg.find_packages(locator), function(p) {
return p != cwd
})
if (length(locators) == 0) {
log.console("No packages found in " + locator)
return
}
log.console(`Found ${text(length(locators))} package(s) in ${locator}`)
added = 0
failed = 0
arrfor(locators, function(loc) {
var loc_parts = array(loc, '/')
var loc_alias = loc_parts[length(loc_parts) - 1]
log.console(" Adding " + loc + " as '" + loc_alias + "'...")
var _add = function() {
pkg.add_dependency(null, loc, loc_alias)
shop.sync(loc)
added = added + 1
} disruption {
log.console(` Warning: Failed to add ${loc}`)
failed = failed + 1
}
_add()
})
log.console("Added " + text(added) + " package(s)." + (failed > 0 ? " Failed: " + text(failed) + "." : ""))
return
}
// Single package add
log.console("Adding " + locator + " as '" + alias + "'...")
_add_dep = function() {
pkg.add_dependency(null, locator, alias)
log.console(" Added to cell.toml")
} disruption {
log.error("Failed to update cell.toml")
return
}
_add_dep()
_install = function() {
shop.sync_with_deps(locator)
log.console(" Installed to shop")
} disruption {
log.error("Failed to install")
return
}
_install()
log.console("Added " + alias + " (" + locator + ")")
}
run()
$stop()

144
analyze.cm Normal file

@@ -0,0 +1,144 @@
// analyze.cm — Static analysis over index data.
//
// All functions take an index object (from index.cm) and return structured results.
// Does not depend on streamline — operates purely on source-semantic data.
var analyze = {}

// Find all references to a name, with optional scope filter.
// scope: "top" (enclosing == null), "fn" (enclosing != null), null (all)
analyze.find_refs = function(idx, name, scope) {
  var hits = []
  var i = 0
  var ref = null
  while (i < length(idx.references)) {
    ref = idx.references[i]
    if (ref.name == name) {
      if (scope == null) {
        hits[] = ref
      } else if (scope == "top" && ref.enclosing == null) {
        hits[] = ref
      } else if (scope == "fn" && ref.enclosing != null) {
        hits[] = ref
      }
    }
    i = i + 1
  }
  return hits
}

// Find all <name>.<property> usage patterns (channel analysis).
// Only counts unshadowed uses (name not declared as local var in scope).
analyze.channels = function(idx, name) {
  var channels = {}
  var summary = {}
  var i = 0
  var cs = null
  var callee = null
  var prop = null
  var prefix_dot = name + "."
  while (i < length(idx.call_sites)) {
    cs = idx.call_sites[i]
    callee = cs.callee
    if (callee != null && starts_with(callee, prefix_dot)) {
      prop = text(callee, length(prefix_dot), length(callee))
      if (channels[prop] == null) {
        channels[prop] = []
      }
      channels[prop][] = {span: cs.span}
      if (summary[prop] == null) {
        summary[prop] = 0
      }
      summary[prop] = summary[prop] + 1
    }
    i = i + 1
  }
  return {channels: channels, summary: summary}
}

// Find declarations by name, with optional kind filter.
// kind: "var", "def", "fn", "param", or null (any)
analyze.find_decls = function(idx, name, kind) {
  var hits = []
  var i = 0
  var sym = null
  while (i < length(idx.symbols)) {
    sym = idx.symbols[i]
    if (sym.name == name) {
      if (kind == null || sym.kind == kind) {
        hits[] = sym
      }
    }
    i = i + 1
  }
  return hits
}

// Find intrinsic usage by name.
analyze.find_intrinsic = function(idx, name) {
  var hits = []
  var i = 0
  var ref = null
  if (idx.intrinsic_refs == null) return hits
  while (i < length(idx.intrinsic_refs)) {
    ref = idx.intrinsic_refs[i]
    if (ref.name == name) {
      hits[] = ref
    }
    i = i + 1
  }
  return hits
}

// Call sites with >4 args — always a compile error (max arity is 4).
analyze.excess_args = function(idx) {
  var hits = []
  var i = 0
  var cs = null
  while (i < length(idx.call_sites)) {
    cs = idx.call_sites[i]
    if (cs.args_count > 4) {
      hits[] = {span: cs.span, callee: cs.callee, args_count: cs.args_count}
    }
    i = i + 1
  }
  return hits
}

// Extract module export shape from index data (for cross-module analysis).
analyze.module_summary = function(idx) {
  var exports = {}
  var i = 0
  var j = 0
  var exp = null
  var sym = null
  var found = false
  if (idx.exports == null) return {exports: exports}
  while (i < length(idx.exports)) {
    exp = idx.exports[i]
    found = false
    if (exp.symbol_id != null) {
      j = 0
      while (j < length(idx.symbols)) {
        sym = idx.symbols[j]
        if (sym.symbol_id == exp.symbol_id) {
          if (sym.kind == "fn" && sym.params != null) {
            exports[exp.name] = {type: "function", arity: length(sym.params)}
          } else {
            exports[exp.name] = {type: sym.kind}
          }
          found = true
          break
        }
        j = j + 1
      }
    }
    if (!found) {
      exports[exp.name] = {type: "unknown"}
    }
    i = i + 1
  }
  return {exports: exports}
}

return analyze


@@ -42,19 +42,19 @@ static JSValue js_miniz_read(JSContext *js, JSValue self, int argc, JSValue *arg
{
size_t len;
void *data = js_get_blob_data(js, &len, argv[0]);
if (data == -1)
if (data == (void *)-1)
return JS_EXCEPTION;
mz_zip_archive *zip = calloc(sizeof(*zip), 1);
if (!zip)
return JS_ThrowOutOfMemory(js);
return JS_RaiseOOM(js);
mz_bool success = mz_zip_reader_init_mem(zip, data, len, 0);
if (!success) {
int err = mz_zip_get_last_error(zip);
free(zip);
return JS_ThrowInternalError(js, "Failed to initialize zip reader: %s", mz_zip_get_error_string(err));
return JS_RaiseDisrupt(js, "Failed to initialize zip reader: %s", mz_zip_get_error_string(err));
}
JSValue jszip = JS_NewObjectClass(js, js_reader_class_id);
@@ -71,7 +71,7 @@ static JSValue js_miniz_write(JSContext *js, JSValue self, int argc, JSValue *ar
mz_zip_archive *zip = calloc(sizeof(*zip), 1);
if (!zip) {
JS_FreeCString(js, file);
return JS_ThrowOutOfMemory(js);
return JS_RaiseOOM(js);
}
mz_bool success = mz_zip_writer_init_file(zip, file, 0);
@@ -81,7 +81,7 @@ static JSValue js_miniz_write(JSContext *js, JSValue self, int argc, JSValue *ar
int err = mz_zip_get_last_error(zip);
mz_zip_writer_end(zip);
free(zip);
return JS_ThrowInternalError(js, "Failed to initialize zip writer: %s", mz_zip_get_error_string(err));
return JS_RaiseDisrupt(js, "Failed to initialize zip writer: %s", mz_zip_get_error_string(err));
}
JSValue jszip = JS_NewObjectClass(js, js_writer_class_id);
@@ -93,7 +93,7 @@ static JSValue js_miniz_compress(JSContext *js, JSValue this_val,
int argc, JSValueConst *argv)
{
if (argc < 1)
return JS_ThrowTypeError(js,
return JS_RaiseDisrupt(js,
"compress needs a string or ArrayBuffer");
/* ─── 1. Grab the input data ──────────────────────────────── */
@@ -109,35 +109,38 @@ static JSValue js_miniz_compress(JSContext *js, JSValue this_val,
in_ptr = cstring;
} else {
in_ptr = js_get_blob_data(js, &in_len, argv[0]);
if (in_ptr == -1)
if (in_ptr == (const void *)-1)
return JS_EXCEPTION;
}
/* ─── 2. Allocate an output buffer big enough ────────────── */
/* ─── 2. Allocate output blob (before getting blob input ptr) ── */
mz_ulong out_len_est = mz_compressBound(in_len);
void *out_buf = js_malloc(js, out_len_est);
if (!out_buf) {
void *out_ptr;
JSValue abuf = js_new_blob_alloc(js, (size_t)out_len_est, &out_ptr);
if (JS_IsException(abuf)) {
if (cstring) JS_FreeCString(js, cstring);
return JS_EXCEPTION;
return abuf;
}
/* Re-derive blob input pointer after alloc (GC may have moved it) */
if (!cstring) {
in_ptr = js_get_blob_data(js, &in_len, argv[0]);
}
/* ─── 3. Do the compression (MZ_DEFAULT_COMPRESSION = level 6) */
mz_ulong out_len = out_len_est;
int st = mz_compress2(out_buf, &out_len,
int st = mz_compress2(out_ptr, &out_len,
in_ptr, in_len, MZ_DEFAULT_COMPRESSION);
/* clean-up for string input */
if (cstring) JS_FreeCString(js, cstring);
if (st != MZ_OK) {
js_free(js, out_buf);
return JS_ThrowInternalError(js,
if (st != MZ_OK)
return JS_RaiseDisrupt(js,
"miniz: compression failed (%d)", st);
}
/* ─── 4. Hand JavaScript a copy of the compressed data ────── */
JSValue abuf = js_new_blob_stoned_copy(js, out_buf, out_len);
js_free(js, out_buf);
/* ─── 4. Stone with actual compressed size ────────────────── */
js_blob_stone(abuf, (size_t)out_len);
return abuf;
}
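The reordering in this hunk exists because `js_new_blob_alloc` may trigger a GC that moves the input blob, invalidating the pointer captured by the earlier `js_get_blob_data` call — hence the re-derive step after allocation. A toy JavaScript model of that hazard (the `MovingHeap` class is entirely hypothetical; the real engine calls are `js_get_blob_data` / `js_new_blob_alloc`):

```javascript
// Toy model of a moving allocator: each allocation relocates existing
// blocks, so an address captured before an allocation is stale afterwards.
class MovingHeap {
  constructor() { this.blocks = new Map(); this.nextId = 0; }
  alloc(size) {
    for (const b of this.blocks.values()) b.offset += size; // relocate live blocks
    const id = this.nextId++;
    this.blocks.set(id, { offset: 0, size });
    return id;
  }
  offsetOf(id) { return this.blocks.get(id).offset; } // always re-derive
}
```

The compress binding follows the same discipline: it allocates the output blob first, then re-fetches the input blob pointer before calling `mz_compress2`.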
@@ -147,13 +150,13 @@ static JSValue js_miniz_decompress(JSContext *js,
JSValueConst *argv)
{
if (argc < 1)
return JS_ThrowTypeError(js,
return JS_RaiseDisrupt(js,
"decompress: need compressed ArrayBuffer");
/* grab compressed data */
size_t in_len;
void *in_ptr = js_get_blob_data(js, &in_len, argv[0]);
if (in_ptr == -1)
if (in_ptr == (void *)-1)
return JS_EXCEPTION;
/* zlib header present → tell tinfl to parse it */
@@ -163,7 +166,7 @@ static JSValue js_miniz_decompress(JSContext *js,
TINFL_FLAG_PARSE_ZLIB_HEADER);
if (!out_ptr)
return JS_ThrowInternalError(js,
return JS_RaiseDisrupt(js,
"miniz: decompression failed");
JSValue ret;
@@ -187,16 +190,16 @@ static const JSCFunctionListEntry js_miniz_funcs[] = {
JSValue js_writer_add_file(JSContext *js, JSValue self, int argc, JSValue *argv)
{
if (argc < 2)
return JS_ThrowTypeError(js, "add_file requires (path, arrayBuffer)");
return JS_RaiseDisrupt(js, "add_file requires (path, arrayBuffer)");
mz_zip_archive *zip = js2writer(js, self);
const char *pathInZip = JS_ToCString(js, argv[0]);
if (!pathInZip)
return JS_ThrowTypeError(js, "Could not parse path argument");
return JS_RaiseDisrupt(js, "Could not parse path argument");
size_t dataLen;
void *data = js_get_blob_data(js, &dataLen, argv[1]);
if (data == -1) {
if (data == (void *)-1) {
JS_FreeCString(js, pathInZip);
return JS_EXCEPTION;
}
@@ -205,7 +208,7 @@ JSValue js_writer_add_file(JSContext *js, JSValue self, int argc, JSValue *argv)
JS_FreeCString(js, pathInZip);
if (!success)
return JS_ThrowInternalError(js, "Failed to add memory to zip");
return JS_RaiseDisrupt(js, "Failed to add memory to zip");
return JS_NULL;
}
@@ -225,7 +228,7 @@ JSValue js_reader_mod(JSContext *js, JSValue self, int argc, JSValue *argv)
mz_zip_archive *zip = js2reader(js, self);
if (!zip) {
JS_FreeCString(js, file);
return JS_ThrowInternalError(js, "Invalid zip reader");
return JS_RaiseDisrupt(js, "Invalid zip reader");
}
mz_zip_archive_file_stat pstat;
@@ -233,19 +236,19 @@ JSValue js_reader_mod(JSContext *js, JSValue self, int argc, JSValue *argv)
if (index == (mz_uint)-1) {
JS_FreeCString(js, file);
return JS_ThrowReferenceError(js, "File '%s' not found in archive", file);
return JS_RaiseDisrupt(js, "File '%s' not found in archive", file);
}
JS_FreeCString(js, file);
if (!mz_zip_reader_file_stat(zip, index, &pstat)) {
int err = mz_zip_get_last_error(zip);
return JS_ThrowInternalError(js, "Failed to get file stats: %s", mz_zip_get_error_string(err));
return JS_RaiseDisrupt(js, "Failed to get file stats: %s", mz_zip_get_error_string(err));
}
return JS_NewFloat64(js, pstat.m_time);
#else
return JS_ThrowInternalError(js, "MINIZ_NO_TIME is defined");
return JS_RaiseDisrupt(js, "MINIZ_NO_TIME is defined");
#endif
}
@@ -258,7 +261,7 @@ JSValue js_reader_exists(JSContext *js, JSValue self, int argc, JSValue *argv)
mz_zip_archive *zip = js2reader(js, self);
if (!zip) {
JS_FreeCString(js, file);
return JS_ThrowInternalError(js, "Invalid zip reader");
return JS_RaiseDisrupt(js, "Invalid zip reader");
}
mz_uint index = mz_zip_reader_locate_file(zip, file, NULL, 0);
@@ -276,7 +279,7 @@ JSValue js_reader_slurp(JSContext *js, JSValue self, int argc, JSValue *argv)
mz_zip_archive *zip = js2reader(js, self);
if (!zip) {
JS_FreeCString(js, file);
return JS_ThrowInternalError(js, "Invalid zip reader");
return JS_RaiseDisrupt(js, "Invalid zip reader");
}
size_t len;
@@ -286,7 +289,7 @@ JSValue js_reader_slurp(JSContext *js, JSValue self, int argc, JSValue *argv)
int err = mz_zip_get_last_error(zip);
const char *filename = file;
JS_FreeCString(js, file);
return JS_ThrowInternalError(js, "Failed to extract file '%s': %s", filename, mz_zip_get_error_string(err));
return JS_RaiseDisrupt(js, "Failed to extract file '%s': %s", filename, mz_zip_get_error_string(err));
}
JS_FreeCString(js, file);
@@ -300,7 +303,7 @@ JSValue js_reader_list(JSContext *js, JSValue self, int argc, JSValue *argv)
{
mz_zip_archive *zip = js2reader(js, self);
if (!zip)
return JS_ThrowInternalError(js, "Invalid zip reader");
return JS_RaiseDisrupt(js, "Invalid zip reader");
mz_uint num_files = mz_zip_reader_get_num_files(zip);
@@ -319,7 +322,7 @@ JSValue js_reader_list(JSContext *js, JSValue self, int argc, JSValue *argv)
JS_FreeValue(js, arr);
return filename;
}
JS_SetPropertyUint32(js, arr, arr_index++, filename);
JS_SetPropertyNumber(js, arr, arr_index++, filename);
}
return arr;
@@ -328,7 +331,7 @@ JSValue js_reader_list(JSContext *js, JSValue self, int argc, JSValue *argv)
JSValue js_reader_is_directory(JSContext *js, JSValue self, int argc, JSValue *argv)
{
if (argc < 1)
return JS_ThrowTypeError(js, "is_directory requires a file index");
return JS_RaiseDisrupt(js, "is_directory requires a file index");
int32_t index;
if (JS_ToInt32(js, &index, argv[0]))
@@ -336,7 +339,7 @@ JSValue js_reader_is_directory(JSContext *js, JSValue self, int argc, JSValue *a
mz_zip_archive *zip = js2reader(js, self);
if (!zip)
return JS_ThrowInternalError(js, "Invalid zip reader");
return JS_RaiseDisrupt(js, "Invalid zip reader");
return JS_NewBool(js, mz_zip_reader_is_file_a_directory(zip, index));
}
@@ -344,7 +347,7 @@ JSValue js_reader_is_directory(JSContext *js, JSValue self, int argc, JSValue *a
JSValue js_reader_get_filename(JSContext *js, JSValue self, int argc, JSValue *argv)
{
if (argc < 1)
return JS_ThrowTypeError(js, "get_filename requires a file index");
return JS_RaiseDisrupt(js, "get_filename requires a file index");
int32_t index;
if (JS_ToInt32(js, &index, argv[0]))
@@ -352,11 +355,11 @@ JSValue js_reader_get_filename(JSContext *js, JSValue self, int argc, JSValue *a
mz_zip_archive *zip = js2reader(js, self);
if (!zip)
return JS_ThrowInternalError(js, "Invalid zip reader");
return JS_RaiseDisrupt(js, "Invalid zip reader");
mz_zip_archive_file_stat file_stat;
if (!mz_zip_reader_file_stat(zip, index, &file_stat))
return JS_ThrowInternalError(js, "Failed to get file stats");
return JS_RaiseDisrupt(js, "Failed to get file stats");
return JS_NewString(js, file_stat.m_filename);
}
@@ -365,7 +368,7 @@ JSValue js_reader_count(JSContext *js, JSValue self, int argc, JSValue *argv)
{
mz_zip_archive *zip = js2reader(js, self);
if (!zip)
return JS_ThrowInternalError(js, "Invalid zip reader");
return JS_RaiseDisrupt(js, "Invalid zip reader");
return JS_NewUint32(js, mz_zip_reader_get_num_files(zip));
}
@@ -379,21 +382,23 @@ static const JSCFunctionListEntry js_reader_funcs[] = {
JS_CFUNC_DEF("count", 0, js_reader_count),
};
JSValue js_miniz_use(JSContext *js)
JSValue js_core_miniz_use(JSContext *js)
{
JS_FRAME(js);
JS_NewClassID(&js_reader_class_id);
JS_NewClass(JS_GetRuntime(js), js_reader_class_id, &js_reader_class);
JSValue reader_proto = JS_NewObject(js);
JS_SetPropertyFunctionList(js, reader_proto, js_reader_funcs, sizeof(js_reader_funcs) / sizeof(JSCFunctionListEntry));
JS_SetClassProto(js, js_reader_class_id, reader_proto);
JS_NewClass(js, js_reader_class_id, &js_reader_class);
JS_ROOT(reader_proto, JS_NewObject(js));
JS_SetPropertyFunctionList(js, reader_proto.val, js_reader_funcs, sizeof(js_reader_funcs) / sizeof(JSCFunctionListEntry));
JS_SetClassProto(js, js_reader_class_id, reader_proto.val);
JS_NewClassID(&js_writer_class_id);
JS_NewClass(JS_GetRuntime(js), js_writer_class_id, &js_writer_class);
JSValue writer_proto = JS_NewObject(js);
JS_SetPropertyFunctionList(js, writer_proto, js_writer_funcs, sizeof(js_writer_funcs) / sizeof(JSCFunctionListEntry));
JS_SetClassProto(js, js_writer_class_id, writer_proto);
JSValue export = JS_NewObject(js);
JS_SetPropertyFunctionList(js, export, js_miniz_funcs, sizeof(js_miniz_funcs)/sizeof(JSCFunctionListEntry));
return export;
JS_NewClass(js, js_writer_class_id, &js_writer_class);
JS_ROOT(writer_proto, JS_NewObject(js));
JS_SetPropertyFunctionList(js, writer_proto.val, js_writer_funcs, sizeof(js_writer_funcs) / sizeof(JSCFunctionListEntry));
JS_SetClassProto(js, js_writer_class_id, writer_proto.val);
JS_ROOT(export, JS_NewObject(js));
JS_SetPropertyFunctionList(js, export.val, js_miniz_funcs, sizeof(js_miniz_funcs)/sizeof(JSCFunctionListEntry));
JS_RETURN(export.val);
}

93
audit.ce Normal file

@@ -0,0 +1,93 @@
// cell audit [<locator>] - Test-compile all .ce and .cm scripts
//
// Usage:
// cell audit Audit all packages
// cell audit <locator> Audit specific package
// cell audit . Audit current directory package
//
// Compiles every script in the package(s) to check for errors.
// Continues past failures and reports all issues at the end.
var shop = use('internal/shop')
var pkg = use('package')
var target_package = null
var i = 0
var run = function() {
for (i = 0; i < length(args); i++) {
if (args[i] == '--help' || args[i] == '-h') {
log.console("Usage: cell audit [<locator>]")
log.console("")
log.console("Test-compile all .ce and .cm scripts in package(s).")
log.console("Reports all errors without stopping at the first failure.")
return
} else if (!starts_with(args[i], '-')) {
target_package = args[i]
}
}
// Resolve local paths
if (target_package) {
target_package = shop.resolve_locator(target_package)
}
var packages = null
var total_ok = 0
var total_errors = 0
var total_scripts = 0
var all_failures = []
var all_unresolved = []
if (target_package) {
packages = [target_package]
} else {
packages = shop.list_packages()
}
arrfor(packages, function(p) {
var scripts = shop.get_package_scripts(p)
if (length(scripts) == 0) return
log.console("Auditing " + p + " (" + text(length(scripts)) + " scripts)...")
var result = shop.build_package_scripts(p)
total_ok = total_ok + result.ok
total_errors = total_errors + length(result.errors)
total_scripts = total_scripts + result.total
arrfor(result.errors, function(e) {
push(all_failures, p + ": " + e)
})
// Check use() resolution
var resolution = shop.audit_use_resolution(p)
arrfor(resolution.unresolved, function(u) {
push(all_unresolved, p + '/' + u.script + ": use('" + u.module + "') cannot be resolved")
})
})
log.console("")
if (length(all_failures) > 0) {
log.console("Failed scripts:")
arrfor(all_failures, function(f) {
log.console(" " + f)
})
log.console("")
}
if (length(all_unresolved) > 0) {
log.console("Unresolved modules:")
arrfor(all_unresolved, function(u) {
log.console(" " + u)
})
log.console("")
}
var summary = "Audit complete: " + text(total_ok) + "/" + text(total_scripts) + " scripts compiled"
if (total_errors > 0) summary = summary + ", " + text(total_errors) + " failed"
if (length(all_unresolved) > 0) summary = summary + ", " + text(length(all_unresolved)) + " unresolved use() calls"
log.console(summary)
}
run()
$stop()
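The audit flow above (compile everything, never stop at the first failure, report a summary at the end) can be sketched in JavaScript; `buildPackage` is a stand-in for `shop.build_package_scripts`:

```javascript
// Aggregate per-package build results into one summary, continuing past failures.
function audit(packages, buildPackage) {
  let totalOk = 0, totalScripts = 0;
  const failures = [];
  for (const p of packages) {
    const result = buildPackage(p); // { ok, total, errors: [...] }
    totalOk += result.ok;
    totalScripts += result.total;
    for (const e of result.errors) failures.push(`${p}: ${e}`);
  }
  return { totalOk, totalScripts, failures };
}
```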

315
bench.ce

@@ -1,18 +1,39 @@
// cell bench - Run benchmarks with statistical analysis
var shop = use('internal/shop')
var pkg = use('package')
var fd = use('fd')
var time = use('time')
var json = use('json')
var blob = use('blob')
var os = use('os')
var os = use('internal/os')
var testlib = use('internal/testlib')
var math = use('math/radians')
if (!args) args = []
var _args = args == null ? [] : args
var target_pkg = null // null = current package
var target_bench = null // null = all benchmarks, otherwise specific bench file
var all_pkgs = false
var bench_mode = "bytecode" // "bytecode", "native", or "compare"
// Strip mode flags from args before parsing
function strip_mode_flags() {
var filtered = []
arrfor(_args, function(a) {
if (a == '--native') {
bench_mode = "native"
} else if (a == '--bytecode') {
bench_mode = "bytecode"
} else if (a == '--compare') {
bench_mode = "compare"
} else {
push(filtered, a)
}
})
_args = filtered
}
strip_mode_flags()
// Benchmark configuration
def WARMUP_BATCHES = 3
@@ -55,14 +76,19 @@ function stddev(arr, mean_val) {
function percentile(arr, p) {
if (length(arr) == 0) return 0
var sorted = sort(arr)
var idx = floor(arr) * p / 100
var idx = floor(length(arr) * p / 100)
if (idx >= length(arr)) idx = length(arr) - 1
return sorted[idx]
}
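The fix in this hunk replaces `floor(arr) * p / 100` with `floor(length(arr) * p / 100)` — the old code passed the array itself to `floor`. The intended nearest-rank index computation, sketched in JavaScript:

```javascript
// Nearest-rank percentile, mirroring the patched bench.ce logic.
function percentile(arr, p) {
  if (arr.length === 0) return 0;
  const sorted = [...arr].sort((a, b) => a - b); // numeric sort, not lexicographic
  let idx = Math.floor(arr.length * p / 100);
  if (idx >= arr.length) idx = arr.length - 1;   // clamp for p = 100
  return sorted[idx];
}
```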
// Parse arguments similar to test.ce
function parse_args() {
if (length(args) == 0) {
var name = null
var lock = null
var resolved = null
var bench_path = null
if (length(_args) == 0) {
if (!testlib.is_valid_package('.')) {
log.console('No cell.toml found in current directory')
return false
@@ -71,7 +97,7 @@ function parse_args() {
return true
}
if (args[0] == 'all') {
if (_args[0] == 'all') {
if (!testlib.is_valid_package('.')) {
log.console('No cell.toml found in current directory')
return false
@@ -80,28 +106,28 @@ function parse_args() {
return true
}
if (args[0] == 'package') {
if (length(args) < 2) {
if (_args[0] == 'package') {
if (length(_args) < 2) {
log.console('Usage: cell bench package <name> [bench]')
log.console(' cell bench package all')
return false
}
if (args[1] == 'all') {
if (_args[1] == 'all') {
all_pkgs = true
log.console('Benchmarking all packages...')
return true
}
var name = args[1]
var lock = shop.load_lock()
name = _args[1]
lock = shop.load_lock()
if (lock[name]) {
target_pkg = name
} else if (starts_with(name, '/') && testlib.is_valid_package(name)) {
target_pkg = name
} else {
if (testlib.is_valid_package('.')) {
var resolved = pkg.alias_to_package(null, name)
resolved = pkg.alias_to_package(null, name)
if (resolved) {
target_pkg = resolved
} else {
@@ -114,8 +140,8 @@ function parse_args() {
}
}
if (length(args) >= 3) {
target_bench = args[2]
if (length(_args) >= 3) {
target_bench = _args[2]
}
log.console(`Benchmarking package: ${target_pkg}`)
@@ -123,7 +149,7 @@ function parse_args() {
}
// cell bench benches/suite or cell bench <path>
var bench_path = args[0]
bench_path = _args[0]
// Normalize path - add benches/ prefix if not present
if (!starts_with(bench_path, 'benches/') && !starts_with(bench_path, '/')) {
@@ -160,12 +186,15 @@ function collect_benches(package_name, specific_bench) {
var files = pkg.list_files(package_name)
var bench_files = []
arrfor(files, function(f) {
var bench_name = null
var match_name = null
var match_base = null
if (starts_with(f, "benches/") && ends_with(f, ".cm")) {
if (specific_bench) {
var bench_name = text(f, 0, -3)
var match_name = specific_bench
bench_name = text(f, 0, -3)
match_name = specific_bench
if (!starts_with(match_name, 'benches/')) match_name = 'benches/' + match_name
var match_base = ends_with(match_name, '.cm') ? text(match_name, 0, -3) : match_name
match_base = ends_with(match_name, '.cm') ? text(match_name, 0, -3) : match_name
if (bench_name != match_base) return
}
push(bench_files, f)
@@ -180,24 +209,25 @@ function calibrate_batch_size(bench_fn, is_batch) {
var n = MIN_BATCH_SIZE
var dt = 0
var start = 0
var new_n = 0
var calc = 0
var target_n = 0
// Find a batch size that takes at least MIN_SAMPLE_NS
while (n < MAX_BATCH_SIZE) {
// Ensure n is a valid number before calling
if (!is_number(n) || n < 1) {
n = 1
break
}
var start = os.now()
start = os.now()
bench_fn(n)
dt = os.now() - start
if (dt >= MIN_SAMPLE_NS) break
// Double the batch size
var new_n = n * 2
// Check if multiplication produced a valid number
new_n = n * 2
if (!is_number(new_n) || new_n > MAX_BATCH_SIZE) {
n = MAX_BATCH_SIZE
break
@@ -207,10 +237,9 @@ function calibrate_batch_size(bench_fn, is_batch) {
// Adjust to target sample duration
if (dt > 0 && dt < TARGET_SAMPLE_NS && is_number(n) && is_number(dt)) {
var calc = n * TARGET_SAMPLE_NS / dt
calc = n * TARGET_SAMPLE_NS / dt
if (is_number(calc) && calc > 0) {
var target_n = floor(calc)
// Check if floor returned a valid number
target_n = floor(calc)
if (is_number(target_n) && target_n > 0) {
if (target_n > MAX_BATCH_SIZE) target_n = MAX_BATCH_SIZE
if (target_n < MIN_BATCH_SIZE) target_n = MIN_BATCH_SIZE
@@ -219,7 +248,6 @@ function calibrate_batch_size(bench_fn, is_batch) {
}
}
// Safety check - ensure we always return a valid batch size
if (!is_number(n) || n < 1) {
n = 1
}
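The calibration logic in the hunks above — double the batch size until one sample is long enough to trust, then scale toward the target sample duration — can be sketched in JavaScript with an injected clock. The constants mirror the `def` names in bench.ce, but their magnitudes here are illustrative assumptions:

```javascript
// Illustrative values; bench.ce defines its own MIN/MAX/TARGET constants.
const MIN_BATCH_SIZE = 1;
const MAX_BATCH_SIZE = 10000000;
const MIN_SAMPLE_NS = 1000000;      // 1 ms: shortest sample worth measuring
const TARGET_SAMPLE_NS = 100000000; // 100 ms: desired duration per sample

// benchFn(n) runs the workload n times; nowNs() is a monotonic clock.
function calibrateBatchSize(benchFn, nowNs) {
  let n = MIN_BATCH_SIZE;
  let dt = 0;
  while (n < MAX_BATCH_SIZE) {
    const start = nowNs();
    benchFn(n);
    dt = nowNs() - start;
    if (dt >= MIN_SAMPLE_NS) break;      // sample long enough to trust
    n = Math.min(n * 2, MAX_BATCH_SIZE); // double and retry
  }
  if (dt > 0 && dt < TARGET_SAMPLE_NS) {
    const target = Math.floor(n * TARGET_SAMPLE_NS / dt); // scale to target
    n = Math.max(MIN_BATCH_SIZE, Math.min(target, MAX_BATCH_SIZE));
  }
  return n;
}
```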
@@ -230,72 +258,70 @@ function calibrate_batch_size(bench_fn, is_batch) {
// Run a single benchmark function
function run_single_bench(bench_fn, bench_name) {
var timings_per_op = []
// Detect benchmark format:
// 1. Object with { setup, run, teardown } - structured format
// 2. Function that accepts (n) - batch format
// 3. Function that accepts () - legacy format
var is_structured = is_object(bench_fn) && bench_fn.run
var is_batch = false
var batch_size = 1
var setup_fn = null
var run_fn = null
var teardown_fn = null
var calibrate_fn = null
var _detect = null
var i = 0
var state = null
var start = 0
var duration = 0
var ns_per_op = 0
if (is_structured) {
setup_fn = bench_fn.setup || function() { return null }
run_fn = bench_fn.run
teardown_fn = bench_fn.teardown || function(state) {}
teardown_fn = bench_fn.teardown || function(s) {}
// Check if run function accepts batch size
try {
_detect = function() {
var test_state = setup_fn()
run_fn(1, test_state)
is_batch = true
if (teardown_fn) teardown_fn(test_state)
} catch (e) {
} disruption {
is_batch = false
}
_detect()
// Create wrapper for calibration
var calibrate_fn = function(n) {
var state = setup_fn()
run_fn(n, state)
if (teardown_fn) teardown_fn(state)
calibrate_fn = function(n) {
var s = setup_fn()
run_fn(n, s)
if (teardown_fn) teardown_fn(s)
}
batch_size = calibrate_batch_size(calibrate_fn, is_batch)
// Safety check for structured benchmarks
if (!is_number(batch_size) || batch_size < 1) {
batch_size = 1
}
} else {
// Simple function format
try {
_detect = function() {
bench_fn(1)
is_batch = true
} catch (e) {
} disruption {
is_batch = false
}
_detect()
batch_size = calibrate_batch_size(bench_fn, is_batch)
}
// Safety check - ensure batch_size is valid
if (!batch_size || batch_size < 1) {
batch_size = 1
}
// Warmup phase
for (var i = 0; i < WARMUP_BATCHES; i++) {
// Ensure batch_size is valid before warmup
for (i = 0; i < WARMUP_BATCHES; i++) {
if (!is_number(batch_size) || batch_size < 1) {
var type_str = is_null(batch_size) ? 'null' : is_number(batch_size) ? 'number' : is_text(batch_size) ? 'text' : is_object(batch_size) ? 'object' : is_array(batch_size) ? 'array' : is_function(batch_size) ? 'function' : is_logical(batch_size) ? 'logical' : 'unknown'
log.console(`WARNING: batch_size became ${type_str} = ${batch_size}, resetting to 1`)
batch_size = 1
}
if (is_structured) {
var state = setup_fn()
state = setup_fn()
if (is_batch) {
run_fn(batch_size, state)
} else {
@@ -312,35 +338,34 @@ function run_single_bench(bench_fn, bench_name) {
}
// Measurement phase - collect SAMPLES timing samples
for (var i = 0; i < SAMPLES; i++) {
// Double-check batch_size is valid (should never happen, but defensive)
for (i = 0; i < SAMPLES; i++) {
if (!is_number(batch_size) || batch_size < 1) {
batch_size = 1
}
if (is_structured) {
var state = setup_fn()
var start = os.now()
state = setup_fn()
start = os.now()
if (is_batch) {
run_fn(batch_size, state)
} else {
run_fn(state)
}
var duration = os.now() - start
duration = os.now() - start
if (teardown_fn) teardown_fn(state)
var ns_per_op = is_batch ? duration / batch_size : duration
ns_per_op = is_batch ? duration / batch_size : duration
push(timings_per_op, ns_per_op)
} else {
var start = os.now()
start = os.now()
if (is_batch) {
bench_fn(batch_size)
} else {
bench_fn()
}
var duration = os.now() - start
duration = os.now() - start
var ns_per_op = is_batch ? duration / batch_size : duration
ns_per_op = is_batch ? duration / batch_size : duration
push(timings_per_op, ns_per_op)
}
}
@@ -354,7 +379,6 @@ function run_single_bench(bench_fn, bench_name) {
var p95_ns = percentile(timings_per_op, 95)
var p99_ns = percentile(timings_per_op, 99)
// Calculate ops/s from median
var ops_per_sec = 0
if (median_ns > 0) {
ops_per_sec = floor(1000000000 / median_ns)
@@ -391,6 +415,53 @@ function format_ops(ops) {
return `${round(ops / 1000000000 * 100) / 100}G ops/s`
}
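Only the gigaops branch of `format_ops` is visible in the hunk above, alongside the `ops_per_sec = floor(1000000000 / median_ns)` conversion earlier in the file. A plausible full version in JavaScript (the K/M thresholds are assumptions; only the G branch and the rounding style are taken from the source):

```javascript
// Ops/s from a median ns-per-op figure, as in bench.ce.
function opsFromMedian(medianNs) {
  return medianNs > 0 ? Math.floor(1000000000 / medianNs) : 0;
}

// Human-readable ops/s; thresholds below 1G are assumed.
function formatOps(ops) {
  if (ops >= 1000000000) return `${Math.round(ops / 1000000000 * 100) / 100}G ops/s`;
  if (ops >= 1000000)    return `${Math.round(ops / 1000000 * 100) / 100}M ops/s`;
  if (ops >= 1000)       return `${Math.round(ops / 1000 * 100) / 100}K ops/s`;
  return `${ops} ops/s`;
}
```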
// Load a module for benchmarking in the given mode
// Returns the module value, or null on failure
function resolve_bench_load(f, package_name) {
var mod_path = text(f, 0, -3)
var use_pkg = package_name ? package_name : fd.realpath('.')
var prefix = testlib.get_pkg_dir(package_name)
var src_path = prefix + '/' + f
return {mod_path, use_pkg, src_path}
}
function load_bench_module_native(f, package_name) {
var r = resolve_bench_load(f, package_name)
return shop.use_native(r.src_path, r.use_pkg)
}
function load_bench_module(f, package_name, mode) {
var r = resolve_bench_load(f, package_name)
if (mode == "native") {
return load_bench_module_native(f, package_name)
}
return shop.use(r.mod_path, r.use_pkg)
}
// Collect benchmark functions from a loaded module
function collect_bench_fns(bench_mod) {
var benches = []
if (is_function(bench_mod)) {
push(benches, {name: 'main', fn: bench_mod})
} else if (is_object(bench_mod)) {
arrfor(array(bench_mod), function(k) {
if (is_function(bench_mod[k]))
push(benches, {name: k, fn: bench_mod[k]})
})
}
return benches
}
// Print results for a single benchmark
function print_bench_result(result, label) {
var prefix = label ? `[${label}] ` : ''
log.console(` ${prefix}${format_ns(result.median_ns)}/op ${format_ops(result.ops_per_sec)}`)
log.console(` ${prefix}min: ${format_ns(result.min_ns)} max: ${format_ns(result.max_ns)} stddev: ${format_ns(result.stddev_ns)}`)
if (result.batch_size > 1) {
log.console(` ${prefix}batch: ${result.batch_size} samples: ${result.samples}`)
}
}
// Run benchmarks for a package
function run_benchmarks(package_name, specific_bench) {
var bench_files = collect_benches(package_name, specific_bench)
@@ -403,66 +474,117 @@ function run_benchmarks(package_name, specific_bench) {
if (length(bench_files) == 0) return pkg_result
if (package_name) log.console(`Running benchmarks for ${package_name}`)
else log.console(`Running benchmarks for local package`)
var mode_label = bench_mode == "compare" ? "bytecode vs native" : bench_mode
if (package_name) log.console(`Running benchmarks for ${package_name} (${mode_label})`)
else log.console(`Running benchmarks for local package (${mode_label})`)
arrfor(bench_files, function(f) {
var mod_path = text(f, 0, -3)
var load_error = false
var benches = []
var native_benches = []
var bench_mod = null
var native_mod = null
var error_result = null
var file_result = {
name: f,
benchmarks: []
}
try {
var bench_mod
var use_pkg = package_name ? package_name : fd.realpath('.')
bench_mod = shop.use(mod_path, use_pkg)
var benches = []
if (is_function(bench_mod)) {
push(benches, {name: 'main', fn: bench_mod})
} else if (is_object(bench_mod)) {
arrfor(array(bench_mod), function(k) {
if (is_function(bench_mod[k]))
push(benches, {name: k, fn: bench_mod[k]})
})
var _load_file = function() {
var _load_native = null
if (bench_mode == "compare") {
bench_mod = load_bench_module(f, package_name, "bytecode")
benches = collect_bench_fns(bench_mod)
_load_native = function() {
native_mod = load_bench_module(f, package_name, "native")
native_benches = collect_bench_fns(native_mod)
} disruption {
log.console(` ${f}: native compilation failed, comparison skipped`)
native_benches = []
}
_load_native()
} else {
bench_mod = load_bench_module(f, package_name, bench_mode)
benches = collect_bench_fns(bench_mod)
}
if (length(benches) > 0) {
log.console(` ${f}`)
arrfor(benches, function(b) {
try {
var result = run_single_bench(b.fn, b.name)
var bench_error = false
var result = null
var nat_b = null
var nat_error = false
var nat_result = null
var _run_bench = function() {
var speedup = 0
var _run_nat = null
result = run_single_bench(b.fn, b.name)
result.package = pkg_result.package
result.mode = bench_mode == "compare" ? "bytecode" : bench_mode
push(file_result.benchmarks, result)
pkg_result.total++
log.console(` ${result.name}`)
log.console(` ${format_ns(result.median_ns)}/op ${format_ops(result.ops_per_sec)}`)
log.console(` min: ${format_ns(result.min_ns)} max: ${format_ns(result.max_ns)} stddev: ${format_ns(result.stddev_ns)}`)
if (result.batch_size > 1) {
log.console(` batch: ${result.batch_size} samples: ${result.samples}`)
if (bench_mode == "compare") {
print_bench_result(result, "bytecode")
// Find matching native bench and run it
nat_b = find(native_benches, function(nb) { return nb.name == b.name })
if (nat_b != null) {
_run_nat = function() {
nat_result = run_single_bench(native_benches[nat_b].fn, b.name)
nat_result.package = pkg_result.package
nat_result.mode = "native"
push(file_result.benchmarks, nat_result)
pkg_result.total++
print_bench_result(nat_result, "native ")
if (nat_result.median_ns > 0) {
speedup = result.median_ns / nat_result.median_ns
log.console(` speedup: ${round(speedup * 100) / 100}x`)
}
} disruption {
nat_error = true
}
_run_nat()
if (nat_error) {
log.console(` [native ] ERROR`)
}
} else {
log.console(` [native ] (no matching function)`)
}
} else {
print_bench_result(result, null)
}
} catch (e) {
log.console(` ERROR ${b.name}: ${e}`)
log.error(e)
var error_result = {
} disruption {
bench_error = true
}
_run_bench()
if (bench_error) {
log.console(` ERROR ${b.name}`)
error_result = {
package: pkg_result.package,
name: b.name,
error: e.toString()
error: "benchmark disrupted"
}
push(file_result.benchmarks, error_result)
pkg_result.total++
}
})
}
} catch (e) {
log.console(` Error loading ${f}: ${e}`)
var error_result = {
} disruption {
load_error = true
}
_load_file()
if (load_error) {
log.console(` Error loading ${f}`)
error_result = {
package: pkg_result.package,
name: "load_module",
error: `Error loading module: ${e}`
error: "error loading module"
}
push(file_result.benchmarks, error_result)
pkg_result.total++
@@ -478,15 +600,16 @@ function run_benchmarks(package_name, specific_bench) {
// Run all benchmarks
var all_results = []
var packages = null
if (all_pkgs) {
if (testlib.is_valid_package('.')) {
push(all_results, run_benchmarks(null, null))
}
var packages = shop.list_packages()
arrfor(packages, function(pkg) {
push(all_results, run_benchmarks(pkg, null))
packages = shop.list_packages()
arrfor(packages, function(p) {
push(all_results, run_benchmarks(p, null))
})
} else {
push(all_results, run_benchmarks(target_pkg, target_bench))
@@ -507,8 +630,10 @@ function generate_reports() {
var report_dir = shop.get_reports_dir() + '/bench_' + timestamp
testlib.ensure_dir(report_dir)
var mode_str = bench_mode == "compare" ? "bytecode vs native" : bench_mode
var txt_report = `BENCHMARK REPORT
Date: ${time.text(time.number())}
Mode: ${mode_str}
Total benchmarks: ${total_benches}
=== SUMMARY ===
@@ -519,10 +644,11 @@ Total benchmarks: ${total_benches}
arrfor(pkg_res.files, function(f) {
txt_report += ` ${f.name}\n`
arrfor(f.benchmarks, function(b) {
var mode_tag = b.mode ? ` [${b.mode}]` : ''
if (b.error) {
txt_report += ` ERROR ${b.name}: ${b.error}\n`
} else {
txt_report += ` ${b.name}: ${format_ns(b.median_ns)}/op (${format_ops(b.ops_per_sec)})\n`
txt_report += ` ${b.name}${mode_tag}: ${format_ns(b.median_ns)}/op (${format_ops(b.ops_per_sec)})\n`
}
})
})
@@ -536,7 +662,8 @@ Total benchmarks: ${total_benches}
arrfor(f.benchmarks, function(b) {
if (b.error) return
txt_report += `\n${pkg_res.package}::${b.name}\n`
var detail_mode = b.mode ? ` [${b.mode}]` : ''
txt_report += `\n${pkg_res.package}::${b.name}${detail_mode}\n`
txt_report += ` batch_size: ${b.batch_size} samples: ${b.samples}\n`
txt_report += ` median: ${format_ns(b.median_ns)}/op\n`
txt_report += ` mean: ${format_ns(b.mean_ns)}/op\n`

86
bench_arith.ce Normal file

@@ -0,0 +1,86 @@
// bench_arith.ce — arithmetic and number crunching benchmark
// Tests: integer add/mul, float ops, loop counter overhead, conditionals
var time = use('time')
def iterations = 2000000
// 1. Integer sum in tight loop
function bench_int_sum() {
var i = 0
var s = 0
for (i = 0; i < iterations; i++) {
s = s + i
}
return s
}
// 2. Integer multiply + mod (sieve-like)
function bench_int_mul_mod() {
var i = 0
var s = 0
for (i = 1; i < iterations; i++) {
s = s + (i * 7 % 1000)
}
return s
}
// 3. Float math — accumulate with division
function bench_float_arith() {
var i = 0
var s = 0.5
for (i = 1; i < iterations; i++) {
s = s + 1.0 / i
}
return s
}
// 4. Nested loop with branch (fizzbuzz-like counter)
function bench_branch() {
var i = 0
var fizz = 0
var buzz = 0
var fizzbuzz = 0
for (i = 1; i <= iterations; i++) {
if (i % 15 == 0) {
fizzbuzz = fizzbuzz + 1
} else if (i % 3 == 0) {
fizz = fizz + 1
} else if (i % 5 == 0) {
buzz = buzz + 1
}
}
return fizz + buzz + fizzbuzz
}
// 5. Nested loop (small inner)
function bench_nested() {
var i = 0
var j = 0
var s = 0
def outer = 5000
def inner = 5000
for (i = 0; i < outer; i++) {
for (j = 0; j < inner; j++) {
s = s + 1
}
}
return s
}
// Run each and print timing
function run(name, fn) {
var start = time.number()
var result = fn()
var elapsed = time.number() - start
var ms = whole(elapsed * 100000) / 100
log.console(` ${name}: ${ms} ms (result: ${result})`)
}
log.console("=== Arithmetic Benchmark ===")
log.console(` iterations: ${iterations}`)
run("int_sum ", bench_int_sum)
run("int_mul_mod ", bench_int_mul_mod)
run("float_arith ", bench_float_arith)
run("branch ", bench_branch)
run("nested_loop ", bench_nested)

67
bench_arith.js Normal file

@@ -0,0 +1,67 @@
// bench_arith.js — arithmetic and number crunching benchmark (QuickJS)
const iterations = 2000000;
function bench_int_sum() {
let s = 0;
for (let i = 0; i < iterations; i++) {
s = s + i;
}
return s;
}
function bench_int_mul_mod() {
let s = 0;
for (let i = 1; i < iterations; i++) {
s = s + (i * 7 % 1000);
}
return s;
}
function bench_float_arith() {
let s = 0.5;
for (let i = 1; i < iterations; i++) {
s = s + 1.0 / i;
}
return s;
}
function bench_branch() {
let fizz = 0, buzz = 0, fizzbuzz = 0;
for (let i = 1; i <= iterations; i++) {
if (i % 15 === 0) {
fizzbuzz = fizzbuzz + 1;
} else if (i % 3 === 0) {
fizz = fizz + 1;
} else if (i % 5 === 0) {
buzz = buzz + 1;
}
}
return fizz + buzz + fizzbuzz;
}
function bench_nested() {
let s = 0;
const outer = 5000, inner = 5000;
for (let i = 0; i < outer; i++) {
for (let j = 0; j < inner; j++) {
s = s + 1;
}
}
return s;
}
function run(name, fn) {
const start = performance.now();
const result = fn();
const elapsed = performance.now() - start;
console.log(` ${name}: ${elapsed.toFixed(2)} ms (result: ${result})`);
}
console.log("=== Arithmetic Benchmark ===");
console.log(` iterations: ${iterations}`);
run("int_sum ", bench_int_sum);
run("int_mul_mod ", bench_int_mul_mod);
run("float_arith ", bench_float_arith);
run("branch ", bench_branch);
run("nested_loop ", bench_nested);

68
bench_arith.lua Normal file

@@ -0,0 +1,68 @@
-- bench_arith.lua — arithmetic and number crunching benchmark (Lua)
local iterations = 2000000
local clock = os.clock -- note: os.clock measures CPU time, while performance.now (JS) is wall-clock
local function bench_int_sum()
local s = 0
for i = 0, iterations - 1 do
s = s + i
end
return s
end
local function bench_int_mul_mod()
local s = 0
for i = 1, iterations - 1 do
s = s + (i * 7 % 1000)
end
return s
end
local function bench_float_arith()
local s = 0.5
for i = 1, iterations - 1 do
s = s + 1.0 / i
end
return s
end
local function bench_branch()
local fizz, buzz, fizzbuzz = 0, 0, 0
for i = 1, iterations do
if i % 15 == 0 then
fizzbuzz = fizzbuzz + 1
elseif i % 3 == 0 then
fizz = fizz + 1
elseif i % 5 == 0 then
buzz = buzz + 1
end
end
return fizz + buzz + fizzbuzz
end
local function bench_nested()
local s = 0
local outer, inner = 5000, 5000
for i = 0, outer - 1 do
for j = 0, inner - 1 do
s = s + 1
end
end
return s
end
local function run(name, fn)
local start = clock()
local result = fn()
local elapsed = (clock() - start) * 1000
print(string.format(" %s: %.2f ms (result: %s)", name, elapsed, tostring(result)))
end
print("=== Arithmetic Benchmark ===")
print(string.format(" iterations: %d", iterations))
run("int_sum ", bench_int_sum)
run("int_mul_mod ", bench_int_mul_mod)
run("float_arith ", bench_float_arith)
run("branch ", bench_branch)
run("nested_loop ", bench_nested)

bench_array.ce (new file)
@@ -0,0 +1,113 @@
// bench_array.ce — array operation benchmark
// Tests: sequential access, push/build, index write, sum reduction, sort
var time = use('time')
def size = 100000
// 1. Build array with push
function bench_push() {
var a = []
var i = 0
for (i = 0; i < size; i++) {
a[] = i
}
return length(a)
}
// 2. Index write into preallocated array
function bench_index_write() {
var a = array(size, 0)
var i = 0
for (i = 0; i < size; i++) {
a[i] = i
}
return a[size - 1]
}
// 3. Sequential read and sum
function bench_seq_read() {
var a = array(size, 0)
var i = 0
for (i = 0; i < size; i++) {
a[i] = i
}
var s = 0
for (i = 0; i < size; i++) {
s = s + a[i]
}
return s
}
// 4. Reverse array in-place
function bench_reverse() {
var a = array(size, 0)
var i = 0
for (i = 0; i < size; i++) {
a[i] = i
}
var lo = 0
var hi = size - 1
var tmp = 0
while (lo < hi) {
tmp = a[lo]
a[lo] = a[hi]
a[hi] = tmp
lo = lo + 1
hi = hi - 1
}
return a[0]
}
// 5. Nested array access (matrix-like, 300x300)
function bench_matrix() {
def n = 300
var mat = array(n, null)
var i = 0
var j = 0
for (i = 0; i < n; i++) {
mat[i] = array(n, 0)
for (j = 0; j < n; j++) {
mat[i][j] = i * n + j
}
}
// sum diagonal
var s = 0
for (i = 0; i < n; i++) {
s = s + mat[i][i]
}
return s
}
// 6. filter-like: count evens
function bench_filter_count() {
var a = array(size, 0)
var i = 0
for (i = 0; i < size; i++) {
a[i] = i
}
var count = 0
for (i = 0; i < size; i++) {
if (a[i] % 2 == 0) {
count = count + 1
}
}
return count
}
function run(name, fn) {
var start = time.number()
var result = fn()
var elapsed = time.number() - start
var ms = whole(elapsed * 100000) / 100
log.console(` ${name}: ${ms} ms (result: ${result})`)
}
log.console("=== Array Benchmark ===")
log.console(` size: ${size}`)
run("push ", bench_push)
run("index_write ", bench_index_write)
run("seq_read_sum ", bench_seq_read)
run("reverse ", bench_reverse)
run("matrix_300 ", bench_matrix)
run("filter_count ", bench_filter_count)

bench_array.js (new file)
@@ -0,0 +1,93 @@
// bench_array.js — array operation benchmark (QuickJS)
const size = 100000;
function bench_push() {
let a = [];
for (let i = 0; i < size; i++) {
a.push(i);
}
return a.length;
}
function bench_index_write() {
let a = new Array(size).fill(0);
for (let i = 0; i < size; i++) {
a[i] = i;
}
return a[size - 1];
}
function bench_seq_read() {
let a = new Array(size).fill(0);
for (let i = 0; i < size; i++) {
a[i] = i;
}
let s = 0;
for (let i = 0; i < size; i++) {
s = s + a[i];
}
return s;
}
function bench_reverse() {
let a = new Array(size).fill(0);
for (let i = 0; i < size; i++) {
a[i] = i;
}
let lo = 0, hi = size - 1, tmp;
while (lo < hi) {
tmp = a[lo];
a[lo] = a[hi];
a[hi] = tmp;
lo = lo + 1;
hi = hi - 1;
}
return a[0];
}
function bench_matrix() {
const n = 300;
let mat = new Array(n);
for (let i = 0; i < n; i++) {
mat[i] = new Array(n).fill(0);
for (let j = 0; j < n; j++) {
mat[i][j] = i * n + j;
}
}
let s = 0;
for (let i = 0; i < n; i++) {
s = s + mat[i][i];
}
return s;
}
function bench_filter_count() {
let a = new Array(size).fill(0);
for (let i = 0; i < size; i++) {
a[i] = i;
}
let count = 0;
for (let i = 0; i < size; i++) {
if (a[i] % 2 === 0) {
count = count + 1;
}
}
return count;
}
function run(name, fn) {
const start = performance.now();
const result = fn();
const elapsed = performance.now() - start;
console.log(` ${name}: ${elapsed.toFixed(2)} ms (result: ${result})`);
}
console.log("=== Array Benchmark ===");
console.log(` size: ${size}`);
run("push ", bench_push);
run("index_write ", bench_index_write);
run("seq_read_sum ", bench_seq_read);
run("reverse ", bench_reverse);
run("matrix_300 ", bench_matrix);
run("filter_count ", bench_filter_count);

bench_array.lua (new file)
@@ -0,0 +1,93 @@
-- bench_array.lua — array operation benchmark (Lua)
local size = 100000
local clock = os.clock
local function bench_push()
local a = {}
for i = 0, size - 1 do
a[#a + 1] = i
end
return #a
end
local function bench_index_write()
local a = {}
for i = 1, size do a[i] = 0 end
for i = 1, size do
a[i] = i - 1
end
return a[size]
end
local function bench_seq_read()
local a = {}
for i = 1, size do
a[i] = i - 1
end
local s = 0
for i = 1, size do
s = s + a[i]
end
return s
end
local function bench_reverse()
local a = {}
for i = 1, size do
a[i] = i - 1
end
local lo, hi = 1, size
while lo < hi do
a[lo], a[hi] = a[hi], a[lo]
lo = lo + 1
hi = hi - 1
end
return a[1]
end
local function bench_matrix()
local n = 300
local mat = {}
for i = 1, n do
mat[i] = {}
for j = 1, n do
mat[i][j] = (i - 1) * n + (j - 1)
end
end
local s = 0
for i = 1, n do
s = s + mat[i][i]
end
return s
end
local function bench_filter_count()
local a = {}
for i = 1, size do
a[i] = i - 1
end
local count = 0
for i = 1, size do
if a[i] % 2 == 0 then
count = count + 1
end
end
return count
end
local function run(name, fn)
local start = clock()
local result = fn()
local elapsed = (clock() - start) * 1000
print(string.format(" %s: %.2f ms (result: %s)", name, elapsed, tostring(result)))
end
print("=== Array Benchmark ===")
print(string.format(" size: %d", size))
run("push ", bench_push)
run("index_write ", bench_index_write)
run("seq_read_sum ", bench_seq_read)
run("reverse ", bench_reverse)
run("matrix_300 ", bench_matrix)
run("filter_count ", bench_filter_count)

bench_fib.ce (new file)
@@ -0,0 +1,21 @@
var time = use('time')
function fib(n) {
if (n < 2) {
return n
}
return fib(n - 1) + fib(n - 2)
}
function run(name, fn) {
var start = time.number()
var result = fn()
var elapsed = time.number() - start
var ms = whole(elapsed * 100000) / 100
log.console(` ${name}: ${ms} ms (result: ${result})`)
}
log.console("=== Cell fib ===")
run("fib(25)", function() { return fib(25) })
run("fib(30)", function() { return fib(30) })
run("fib(35)", function() { return fib(35) })

bench_native.ce (new file)
@@ -0,0 +1,194 @@
// bench_native.ce — compare VM vs native execution speed
//
// Usage:
// cell --dev bench_native.ce <module.cm> [iterations]
//
// Compiles (if needed) and benchmarks a module via both VM and native dylib.
// Reports median/mean timing per benchmark + speedup ratio.
var os = use('internal/os')
var fd = use('fd')
if (length(args) < 1) {
log.bench('usage: cell --dev bench_native.ce <module.cm> [iterations]')
return
}
var file = args[0]
var name = file
if (ends_with(name, '.cm')) {
name = text(name, 0, length(name) - 3)
}
var iterations = 11
if (length(args) > 1) {
iterations = number(args[1])
}
def WARMUP = 3
var safe = replace(replace(name, '/', '_'), '-', '_')
var symbol = 'js_' + safe + '_use'
var dylib_path = './' + file + '.dylib'
// --- Statistics ---
var stat_sort = function(arr) {
return sort(arr)
}
var stat_median = function(arr) {
if (length(arr) == 0) return 0
var sorted = stat_sort(arr)
var mid = floor(length(arr) / 2)
if (length(arr) % 2 == 0) {
return (sorted[mid - 1] + sorted[mid]) / 2
}
return sorted[mid]
}
var stat_mean = function(arr) {
if (length(arr) == 0) return 0
var sum = reduce(arr, function(a, b) { return a + b })
return sum / length(arr)
}
var format_ns = function(ns) {
if (ns < 1000) return text(round(ns)) + 'ns'
if (ns < 1000000) return text(round(ns / 1000 * 100) / 100) + 'us'
if (ns < 1000000000) return text(round(ns / 1000000 * 100) / 100) + 'ms'
return text(round(ns / 1000000000 * 100) / 100) + 's'
}
// --- Collect benchmarks from module ---
var collect_benches = function(mod) {
var benches = []
var keys = null
var i = 0
var k = null
if (is_function(mod)) {
push(benches, {name: 'main', fn: mod})
} else if (is_object(mod)) {
keys = array(mod)
i = 0
while (i < length(keys)) {
k = keys[i]
if (is_function(mod[k])) {
push(benches, {name: k, fn: mod[k]})
}
i = i + 1
}
}
return benches
}
// --- Run one benchmark function ---
var run_bench = function(fn, label) {
var samples = []
var i = 0
var t1 = 0
var t2 = 0
// warmup
i = 0
while (i < WARMUP) {
fn(1)
i = i + 1
}
// collect samples
i = 0
while (i < iterations) {
t1 = os.now()
fn(1)
t2 = os.now()
push(samples, t2 - t1)
i = i + 1
}
return {
label: label,
median: stat_median(samples),
mean: stat_mean(samples)
}
}
// --- Load VM module ---
log.bench('loading VM module: ' + file)
var vm_mod = use(name)
var vm_benches = collect_benches(vm_mod)
if (length(vm_benches) == 0) {
log.bench('no benchmarkable functions found in ' + file)
return
}
// --- Load native module ---
var native_mod = null
var native_benches = []
var has_native = fd.is_file(dylib_path)
var lib = null
if (has_native) {
log.bench('loading native module: ' + dylib_path)
lib = os.dylib_open(dylib_path)
native_mod = os.dylib_symbol(lib, symbol)
native_benches = collect_benches(native_mod)
} else {
log.bench('no ' + dylib_path + ' found -- VM-only benchmarking')
log.bench(' hint: cell --dev compile.ce ' + file)
}
// --- Run benchmarks ---
log.bench('')
log.bench('samples: ' + text(iterations) + ' (warmup: ' + text(WARMUP) + ')')
log.bench('')
var pad = function(s, n) {
var result = s
while (length(result) < n) result = result + ' '
return result
}
var i = 0
var b = null
var vm_result = null
var j = 0
var found = false
var nat_result = null
var speedup = 0
while (i < length(vm_benches)) {
b = vm_benches[i]
vm_result = run_bench(b.fn, 'vm')
log.bench(pad(b.name, 20) + ' VM: ' + pad(format_ns(vm_result.median), 12) + ' (median) ' + format_ns(vm_result.mean) + ' (mean)')
// find matching native bench
j = 0
found = false
while (j < length(native_benches)) {
if (native_benches[j].name == b.name) {
nat_result = run_bench(native_benches[j].fn, 'native')
log.bench(pad('', 20) + ' NT: ' + pad(format_ns(nat_result.median), 12) + ' (median) ' + format_ns(nat_result.mean) + ' (mean)')
if (nat_result.median > 0) {
speedup = vm_result.median / nat_result.median
log.bench(pad('', 20) + ' speedup: ' + text(round(speedup * 100) / 100) + 'x')
}
found = true
}
j = j + 1
}
if (has_native && !found) {
log.bench(pad('', 20) + ' NT: (no matching function)')
}
log.bench('')
i = i + 1
}
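The median/mean reporting in bench_native.ce can be sketched in plain JavaScript. This is an illustrative stand-alone version (the names `median`, `mean`, and `formatNs` are not part of the repo):

```javascript
// Sketch of the statistics used by bench_native.ce's reporter.
// median: middle sample (average of the two middle samples for even counts).
function median(samples) {
  if (samples.length === 0) return 0;
  const sorted = [...samples].sort((a, b) => a - b); // numeric sort, not lexicographic
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 === 0
    ? (sorted[mid - 1] + sorted[mid]) / 2
    : sorted[mid];
}

function mean(samples) {
  if (samples.length === 0) return 0;
  return samples.reduce((a, b) => a + b, 0) / samples.length;
}

// Mirrors format_ns: pick the largest unit that keeps the value >= 1,
// rounded to two decimal places.
function formatNs(ns) {
  if (ns < 1e3) return `${Math.round(ns)}ns`;
  if (ns < 1e6) return `${Math.round(ns / 1e3 * 100) / 100}us`;
  if (ns < 1e9) return `${Math.round(ns / 1e6 * 100) / 100}ms`;
  return `${Math.round(ns / 1e9 * 100) / 100}s`;
}
```

Reporting the median rather than the mean is the usual choice for benchmark samples because it is robust to one-off outliers such as GC pauses or scheduler hiccups.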

bench_object.ce (new file)
@@ -0,0 +1,118 @@
// bench_object.ce — object/record and string benchmark
// Tests: property read/write, string concat, string interpolation, method-like dispatch
var time = use('time')
def iterations = 200000
// 1. Record create + property write
function bench_record_create() {
var i = 0
var r = null
for (i = 0; i < iterations; i++) {
r = {x: i, y: i + 1, z: i + 2}
}
return r.z
}
// 2. Property read in loop
function bench_prop_read() {
var obj = {x: 10, y: 20, z: 30, w: 40}
var i = 0
var s = 0
for (i = 0; i < iterations; i++) {
s = s + obj.x + obj.y + obj.z + obj.w
}
return s
}
// 3. Dynamic property access (computed keys)
function bench_dynamic_prop() {
var obj = {a: 1, b: 2, c: 3, d: 4, e: 5}
var keys = ["a", "b", "c", "d", "e"]
var i = 0
var j = 0
var s = 0
for (i = 0; i < iterations; i++) {
for (j = 0; j < 5; j++) {
s = s + obj[keys[j]]
}
}
return s
}
// 4. String concatenation
function bench_string_concat() {
var i = 0
var s = ""
def n = 10000
for (i = 0; i < n; i++) {
s = s + "x"
}
return length(s)
}
// 5. String interpolation
function bench_interpolation() {
var i = 0
var s = ""
def n = 50000
for (i = 0; i < n; i++) {
s = `item_${i}`
}
return s
}
// 6. Method-like call through a record field
function make_point(x, y) {
return {
x: x,
y: y,
sum: function(self) {
return self.x + self.y
}
}
}
function bench_method_call() {
var p = make_point(3, 4)
var i = 0
var s = 0
for (i = 0; i < iterations; i++) {
s = s + p.sum(p)
}
return s
}
// 7. Function call overhead (recursive fib)
function fib(n) {
if (n <= 1) return n
return fib(n - 1) + fib(n - 2)
}
function bench_fncall() {
var i = 0
var s = 0
for (i = 0; i < 20; i++) {
s = s + fib(25)
}
return s
}
function run(name, fn) {
var start = time.number()
var result = fn()
var elapsed = time.number() - start
var ms = whole(elapsed * 100000) / 100
log.console(` ${name}: ${ms} ms (result: ${result})`)
}
log.console("=== Object / String / Call Benchmark ===")
log.console(` iterations: ${iterations}`)
run("record_create ", bench_record_create)
run("prop_read ", bench_prop_read)
run("dynamic_prop ", bench_dynamic_prop)
run("string_concat ", bench_string_concat)
run("interpolation ", bench_interpolation)
run("method_call ", bench_method_call)
run("fncall_fib25 ", bench_fncall)

bench_object.js (new file)
@@ -0,0 +1,99 @@
// bench_object.js — object/string/call benchmark (QuickJS)
const iterations = 200000;
function bench_record_create() {
let r;
for (let i = 0; i < iterations; i++) {
r = {x: i, y: i + 1, z: i + 2};
}
return r.z;
}
function bench_prop_read() {
const obj = {x: 10, y: 20, z: 30, w: 40};
let s = 0;
for (let i = 0; i < iterations; i++) {
s = s + obj.x + obj.y + obj.z + obj.w;
}
return s;
}
function bench_dynamic_prop() {
const obj = {a: 1, b: 2, c: 3, d: 4, e: 5};
const keys = ["a", "b", "c", "d", "e"];
let s = 0;
for (let i = 0; i < iterations; i++) {
for (let j = 0; j < 5; j++) {
s = s + obj[keys[j]];
}
}
return s;
}
function bench_string_concat() {
let s = "";
const n = 10000;
for (let i = 0; i < n; i++) {
s = s + "x";
}
return s.length;
}
function bench_interpolation() {
let s = "";
const n = 50000;
for (let i = 0; i < n; i++) {
s = `item_${i}`;
}
return s;
}
function make_point(x, y) {
return {
x: x,
y: y,
sum: function(self) {
return self.x + self.y;
}
};
}
function bench_method_call() {
const p = make_point(3, 4);
let s = 0;
for (let i = 0; i < iterations; i++) {
s = s + p.sum(p);
}
return s;
}
function fib(n) {
if (n <= 1) return n;
return fib(n - 1) + fib(n - 2);
}
function bench_fncall() {
let s = 0;
for (let i = 0; i < 20; i++) {
s = s + fib(25);
}
return s;
}
function run(name, fn) {
const start = performance.now();
const result = fn();
const elapsed = performance.now() - start;
console.log(` ${name}: ${elapsed.toFixed(2)} ms (result: ${result})`);
}
console.log("=== Object / String / Call Benchmark ===");
console.log(` iterations: ${iterations}`);
run("record_create ", bench_record_create);
run("prop_read ", bench_prop_read);
run("dynamic_prop ", bench_dynamic_prop);
run("string_concat ", bench_string_concat);
run("interpolation ", bench_interpolation);
run("method_call ", bench_method_call);
run("fncall_fib25 ", bench_fncall);

bench_object.lua (new file)
@@ -0,0 +1,101 @@
-- bench_object.lua — object/string/call benchmark (Lua)
local iterations = 200000
local clock = os.clock
local function bench_record_create()
local r
for i = 0, iterations - 1 do
r = {x = i, y = i + 1, z = i + 2}
end
return r.z
end
local function bench_prop_read()
local obj = {x = 10, y = 20, z = 30, w = 40}
local s = 0
for i = 0, iterations - 1 do
s = s + obj.x + obj.y + obj.z + obj.w
end
return s
end
local function bench_dynamic_prop()
local obj = {a = 1, b = 2, c = 3, d = 4, e = 5}
local keys = {"a", "b", "c", "d", "e"}
local s = 0
for i = 0, iterations - 1 do
for j = 1, 5 do
s = s + obj[keys[j]]
end
end
return s
end
local function bench_string_concat()
-- note: builds via table.concat (idiomatic Lua) rather than repeated `..`,
-- so this is not directly comparable to the repeated-concat variants in
-- the Cell and JS versions
local parts = {}
local n = 10000
for i = 1, n do
parts[i] = "x"
end
local s = table.concat(parts)
return #s
end
local function bench_interpolation()
local s = ""
local n = 50000
for i = 0, n - 1 do
s = string.format("item_%d", i)
end
return s
end
local function make_point(x, y)
return {
x = x,
y = y,
sum = function(self)
return self.x + self.y
end
}
end
local function bench_method_call()
local p = make_point(3, 4)
local s = 0
for i = 0, iterations - 1 do
s = s + p.sum(p)
end
return s
end
local function fib(n)
if n <= 1 then return n end
return fib(n - 1) + fib(n - 2)
end
local function bench_fncall()
local s = 0
for i = 0, 19 do
s = s + fib(25)
end
return s
end
local function run(name, fn)
local start = clock()
local result = fn()
local elapsed = (clock() - start) * 1000
print(string.format(" %s: %.2f ms (result: %s)", name, elapsed, tostring(result)))
end
print("=== Object / String / Call Benchmark ===")
print(string.format(" iterations: %d", iterations))
run("record_create ", bench_record_create)
run("prop_read ", bench_prop_read)
run("dynamic_prop ", bench_dynamic_prop)
run("string_concat ", bench_string_concat)
run("interpolation ", bench_interpolation)
run("method_call ", bench_method_call)
run("fncall_fib25 ", bench_fncall)

benches/actor_patterns.cm (new file)
@@ -0,0 +1,232 @@
// actor_patterns.cm — Actor concurrency benchmarks
// Message passing, fan-out/fan-in, mailbox throughput.
// These use structured benchmarks with setup/run/teardown.
// Note: real actors can't be spawned from a module, so these benchmarks
// simulate the message-passing patterns (mailboxes, send/receive dispatch)
// that actor execution performs. They measure dispatch and queue overhead
// rather than true concurrency.
// Simulate message dispatch overhead
function make_mailbox() {
return {
queue: [],
delivered: 0
}
}
function send(mailbox, msg) {
push(mailbox.queue, msg)
return null
}
function receive(mailbox) {
if (length(mailbox.queue) == 0) return null
mailbox.delivered++
return pop(mailbox.queue)  // pop takes from the tail, so delivery order is LIFO, not FIFO
}
function drain(mailbox) {
var count = 0
while (length(mailbox.queue) > 0) {
pop(mailbox.queue)
count++
}
return count
}
// Ping-pong: simulate two actors exchanging messages
function ping_pong(rounds) {
var box_a = make_mailbox()
var box_b = make_mailbox()
var i = 0
var msg = null
send(box_a, {type: "ping", val: 0})
for (i = 0; i < rounds; i++) {
// A receives and sends to B
msg = receive(box_a)
if (msg) {
send(box_b, {type: "pong", val: msg.val + 1})
}
// B receives and sends to A
msg = receive(box_b)
if (msg) {
send(box_a, {type: "ping", val: msg.val + 1})
}
}
return box_a.delivered + box_b.delivered
}
// Fan-out: one sender, N receivers
function fan_out(n_receivers, messages_per) {
var receivers = []
var i = 0
var j = 0
for (i = 0; i < n_receivers; i++) {
push(receivers, make_mailbox())
}
// Send messages to all receivers
for (j = 0; j < messages_per; j++) {
for (i = 0; i < n_receivers; i++) {
send(receivers[i], {seq: j, data: j * 17})
}
}
// All receivers drain
var total = 0
for (i = 0; i < n_receivers; i++) {
total += drain(receivers[i])
}
return total
}
// Fan-in: N senders, one receiver
function fan_in(n_senders, messages_per) {
var inbox = make_mailbox()
var i = 0
var j = 0
// Each sender sends messages
for (i = 0; i < n_senders; i++) {
for (j = 0; j < messages_per; j++) {
send(inbox, {sender: i, seq: j, data: i * 100 + j})
}
}
// Receiver processes all
var total = 0
var msg = null
msg = receive(inbox)
while (msg) {
total += msg.data
msg = receive(inbox)
}
return total
}
// Pipeline: chain of processors
function pipeline(stages, items) {
var boxes = []
var i = 0
var j = 0
var msg = null
for (i = 0; i <= stages; i++) {
push(boxes, make_mailbox())
}
// Feed input
for (i = 0; i < items; i++) {
send(boxes[0], {val: i})
}
// Process each stage
for (j = 0; j < stages; j++) {
msg = receive(boxes[j])
while (msg) {
send(boxes[j + 1], {val: msg.val * 2 + 1})
msg = receive(boxes[j])
}
}
// Drain output
var total = 0
msg = receive(boxes[stages])
while (msg) {
total += msg.val
msg = receive(boxes[stages])
}
return total
}
// Request-response pattern (simulate RPC)
function request_response(n_requests) {
var client_box = make_mailbox()
var server_box = make_mailbox()
var i = 0
var req = null
var resp = null
var total = 0
for (i = 0; i < n_requests; i++) {
// Client sends request
send(server_box, {id: i, payload: i * 3, reply_to: client_box})
// Server processes
req = receive(server_box)
if (req) {
send(req.reply_to, {id: req.id, result: req.payload * 2 + 1})
}
// Client receives response
resp = receive(client_box)
if (resp) {
total += resp.result
}
}
return total
}
return {
// Ping-pong: 10K rounds
ping_pong_10k: function(n) {
var i = 0
var x = 0
for (i = 0; i < n; i++) {
x += ping_pong(10000)
}
return x
},
// Fan-out: 100 receivers, 100 messages each
fan_out_100x100: function(n) {
var i = 0
var x = 0
for (i = 0; i < n; i++) {
x += fan_out(100, 100)
}
return x
},
// Fan-in: 100 senders, 100 messages each
fan_in_100x100: function(n) {
var i = 0
var x = 0
for (i = 0; i < n; i++) {
x += fan_in(100, 100)
}
return x
},
// Pipeline: 10 stages, 1000 items
pipeline_10x1k: function(n) {
var i = 0
var x = 0
for (i = 0; i < n; i++) {
x += pipeline(10, 1000)
}
return x
},
// Request-response: 5K requests
rpc_5k: function(n) {
var i = 0
var x = 0
for (i = 0; i < n; i++) {
x += request_response(5000)
}
return x
}
}
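The mailbox simulation above can be sketched in stand-alone JavaScript. `pingPong` here is a hypothetical mirror of the Cell `ping_pong` (using FIFO `shift` where the Cell code pops from the tail):

```javascript
// Minimal mailbox ping-pong sketch, same shape as the Cell code.
function makeMailbox() {
  return { queue: [], delivered: 0 };
}

function send(box, msg) {
  box.queue.push(msg);
}

function receive(box) {
  if (box.queue.length === 0) return null;
  box.delivered++;
  return box.queue.shift(); // FIFO here; the Cell version pops from the tail
}

// Two simulated actors exchange one message per side per round.
function pingPong(rounds) {
  const a = makeMailbox();
  const b = makeMailbox();
  send(a, { type: "ping", val: 0 });
  for (let i = 0; i < rounds; i++) {
    let msg = receive(a);
    if (msg) send(b, { type: "pong", val: msg.val + 1 });
    msg = receive(b);
    if (msg) send(a, { type: "ping", val: msg.val + 1 });
  }
  // Each round delivers exactly one message per mailbox,
  // so the total is 2 * rounds.
  return a.delivered + b.delivered;
}
```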

benches/cli_tool.cm (new file)
@@ -0,0 +1,141 @@
// cli_tool.cm — CLI tool simulation (macro benchmark)
// Parse args + process data + transform + format output.
// Simulates a realistic small utility program.
var json = use('json')
// Generate fake records
function generate_records(n) {
var records = []
var x = 42
var i = 0
var status_vals = ["active", "inactive", "pending", "archived"]
var dept_vals = ["eng", "sales", "ops", "hr", "marketing"]
for (i = 0; i < n; i++) {
x = ((x * 1103515245 + 12345) & 0x7FFFFFFF) | 0
push(records, {
id: i + 1,
name: `user_${i}`,
score: (x % 1000) / 10,
status: status_vals[i % 4],
department: dept_vals[i % 5]
})
}
return records
}
// Filter records by field value
function filter_records(records, field, value) {
var result = []
var i = 0
for (i = 0; i < length(records); i++) {
if (records[i][field] == value) {
push(result, records[i])
}
}
return result
}
// Group by a field
function group_by(records, field) {
var groups = {}
var i = 0
var key = null
for (i = 0; i < length(records); i++) {
key = records[i][field]
if (!key) key = "unknown"
if (!groups[key]) groups[key] = []
push(groups[key], records[i])
}
return groups
}
// Aggregate: compute stats per group
function aggregate(groups) {
var keys = array(groups)
var result = []
var i = 0
var j = 0
var grp = null
var total = 0
var mn = 0
var mx = 0
for (i = 0; i < length(keys); i++) {
grp = groups[keys[i]]
total = 0
mn = 999999
mx = 0
for (j = 0; j < length(grp); j++) {
total += grp[j].score
if (grp[j].score < mn) mn = grp[j].score
if (grp[j].score > mx) mx = grp[j].score
}
push(result, {
group: keys[i],
count: length(grp),
average: total / length(grp),
low: mn,
high: mx
})
}
return result
}
// Full pipeline: generate → filter → sort → group → aggregate → encode
function run_pipeline(n_records) {
// Generate data
var records = generate_records(n_records)
// Filter to active records
var filtered = filter_records(records, "status", "active")
// Sort by score
filtered = sort(filtered, "score")
// Limit to first 50
if (length(filtered) > 50) {
filtered = array(filtered, 0, 50)
}
// Group and aggregate
var groups = group_by(filtered, "department")
var stats = aggregate(groups)
stats = sort(stats, "average")
// Encode as JSON
var output = json.encode(stats)
return length(output)
}
return {
// Small dataset (100 records)
cli_pipeline_100: function(n) {
var i = 0
var x = 0
for (i = 0; i < n; i++) {
x += run_pipeline(100)
}
return x
},
// Medium dataset (1000 records)
cli_pipeline_1k: function(n) {
var i = 0
var x = 0
for (i = 0; i < n; i++) {
x += run_pipeline(1000)
}
return x
},
// Large dataset (10K records)
cli_pipeline_10k: function(n) {
var i = 0
var x = 0
for (i = 0; i < n; i++) {
x += run_pipeline(10000)
}
return x
}
}
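The record generator above seeds its scores with a linear congruential generator using the classic glibc `rand` constants (1103515245, 12345). A stand-alone JavaScript sketch of that LCG (the `lcg` name is illustrative, not part of the repo):

```javascript
// LCG matching generate_records: x' = (x * 1103515245 + 12345) & 0x7FFFFFFF.
// The product is evaluated as a double (low bits are rounded away once x
// grows large) before the bitwise AND truncates to a non-negative 31-bit
// int, so the sequence differs from an exact 64-bit LCG but is still fully
// deterministic: the same seed always yields the same record scores.
function lcg(seed) {
  let x = seed;
  return function next() {
    x = (x * 1103515245 + 12345) & 0x7FFFFFFF;
    return x;
  };
}
```

Determinism is what matters for the benchmark: every run processes identical data, so timings are comparable across engines and revisions.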

benches/deltablue.cm (new file)
@@ -0,0 +1,162 @@
// deltablue.cm — Constraint solver kernel (DeltaBlue-inspired)
// Dynamic dispatch, pointer chasing, object-heavy workload.
def REQUIRED = 0
def STRONG = 1
def NORMAL = 2
def WEAK = 3
def WEAKEST = 4
function make_variable(name, value) {
return {
name: name,
value: value,
constraints: [],
determined_by: null,
stay: true,
mark: 0
}
}
function make_constraint(strength, variables, satisfy_fn) {
return {
strength: strength,
variables: variables,
satisfy: satisfy_fn,
output: null
}
}
// Constraint propagation: simple forward solver
function propagate(vars, constraints) {
var changed = true
var passes = 0
var max_passes = length(constraints) * 3
var i = 0
var c = null
var old_val = 0
while (changed && passes < max_passes) {
changed = false
passes++
for (i = 0; i < length(constraints); i++) {
c = constraints[i]
old_val = c.output ? c.output.value : null
c.satisfy(c)
if (c.output && c.output.value != old_val) {
changed = true
}
}
}
return passes
}
// Build a chain of increment constraints: v[i] = v[i-1] + 1
function build_chain(n) {
var vars = []
var constraints = []
var i = 0
for (i = 0; i < n; i++) {
push(vars, make_variable(`v${i}`, 0))
}
// Set first variable
vars[0].value = 1
var c = null
for (i = 1; i < n; i++) {
c = make_constraint(NORMAL, [vars[i - 1], vars[i]], function(self) {
self.variables[1].value = self.variables[0].value + 1
self.output = self.variables[1]
})
push(constraints, c)
push(vars[i].constraints, c)
}
return {vars: vars, constraints: constraints}
}
// Build a projection: pairs of variables with scaling constraints
function build_projection(n) {
var src = []
var dst = []
var constraints = []
var i = 0
for (i = 0; i < n; i++) {
push(src, make_variable(`src${i}`, i * 10))
push(dst, make_variable(`dst${i}`, 0))
}
var scale_c = null
for (i = 0; i < n; i++) {
scale_c = make_constraint(STRONG, [src[i], dst[i]], function(self) {
self.variables[1].value = self.variables[0].value * 2 + 1
self.output = self.variables[1]
})
push(constraints, scale_c)
push(dst[i].constraints, scale_c)
}
return {src: src, dst: dst, constraints: constraints}
}
// Edit constraint: change a source, re-propagate
function run_edits(system, edits) {
var i = 0
var total_passes = 0
for (i = 0; i < edits; i++) {
system.vars[0].value = i
total_passes += propagate(system.vars, system.constraints)
}
return total_passes
}
return {
// Chain of 100 variables, propagate
chain_100: function(n) {
var i = 0
var chain = null
var x = 0
for (i = 0; i < n; i++) {
chain = build_chain(100)
x += propagate(chain.vars, chain.constraints)
}
return x
},
// Chain of 500 variables, propagate
chain_500: function(n) {
var i = 0
var chain = null
var x = 0
for (i = 0; i < n; i++) {
chain = build_chain(500)
x += propagate(chain.vars, chain.constraints)
}
return x
},
// Projection of 100 pairs
projection_100: function(n) {
var i = 0
var proj = null
var x = 0
for (i = 0; i < n; i++) {
proj = build_projection(100)
x += propagate(proj.src, proj.constraints)
}
return x
},
// Edit and re-propagate (incremental update)
chain_edit_100: function(n) {
var chain = build_chain(100)
var i = 0
var x = 0
for (i = 0; i < n; i++) {
chain.vars[0].value = i
x += propagate(chain.vars, chain.constraints)
}
return x
}
}
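The forward solver in deltablue.cm is a fixed-point loop: re-run every constraint until a full pass produces no change. A minimal JavaScript sketch of that idea on the same increment chain (names hypothetical, and without the Cell version's `max_passes` safety cap):

```javascript
// Each constraint writes vars[i] = vars[i-1] + 1; propagateChain reruns all
// constraints until a full pass changes nothing, like deltablue.cm's
// propagate(). Returns the settled values and the pass count.
function propagateChain(n) {
  const vars = new Array(n).fill(0);
  vars[0] = 1;
  const constraints = [];
  for (let i = 1; i < n; i++) {
    const idx = i; // capture per-constraint index
    constraints.push(() => {
      const next = vars[idx - 1] + 1;
      if (next === vars[idx]) return false; // already satisfied
      vars[idx] = next;
      return true; // changed: another pass is needed
    });
  }
  let passes = 0;
  let changed = true;
  while (changed) {
    changed = false;
    passes++;
    for (const c of constraints) {
      if (c()) changed = true;
    }
  }
  return { vars, passes };
}
```

Because the constraints happen to run in dependency order, the chain settles in one working pass plus one confirming pass; a worst-case ordering would need up to one pass per variable, which is why the Cell code caps passes at `length(constraints) * 3`.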

benches/encoders.cm (new file)
@@ -0,0 +1,405 @@
// encoders.cm — nota/wota/json encode+decode benchmark
// Isolates per-type bottlenecks across all three serializers.
var nota = use('internal/nota')
var wota = use('internal/wota')
var json = use('json')
// --- Test data shapes ---
// Small integers: fast path for all encoders
var integers_small = array(100, function(i) { return i + 1 })
// Floats: stresses nota's snprintf path
var floats_array = array(100, function(i) {
return 3.14159 * (i + 1) + 0.00001 * i
})
// Short strings in records: per-string overhead, property enumeration
var strings_short = array(50, function(i) {
var r = {}
r[`k${i}`] = `value_${i}`
return r
})
// Single long string: throughput test (wota byte loop, nota kim)
var long_str = ""
var li = 0
for (li = 0; li < 100; li++) {
long_str = `${long_str}abcdefghijklmnopqrstuvwxyz0123456789ABCDEFGHIJKLMN`
}
var strings_long = long_str
// Unicode text: nota's kim encoding, wota's byte packing
var strings_unicode = "こんにちは世界 🌍🌎🌏 Ñoño café résumé naïve Ω∑∏ 你好世界"
// Nested records: cycle detection, property enumeration
function make_nested(depth, breadth) {
var obj = {}
var i = 0
var k = null
if (depth <= 0) {
for (i = 0; i < breadth; i++) {
k = `v${i}`
obj[k] = i * 2.5
}
return obj
}
for (i = 0; i < breadth; i++) {
k = `n${i}`
obj[k] = make_nested(depth - 1, breadth)
}
return obj
}
var nested_records = make_nested(3, 4)
// Flat record: property enumeration cost
var flat_record = {}
var fi = 0
for (fi = 0; fi < 50; fi++) {
flat_record[`prop_${fi}`] = fi * 1.1
}
// Mixed payload: realistic workload
var mixed_payload = array(50, function(i) {
var r = {}
r.id = i
r.name = `item_${i}`
r.active = i % 2 == 0
r.score = i * 3.14
r.tags = [`t${i % 5}`, `t${(i + 1) % 5}`]
return r
})
// --- Pre-encode for decode benchmarks ---
var nota_enc_integers = nota.encode(integers_small)
var nota_enc_floats = nota.encode(floats_array)
var nota_enc_strings_short = nota.encode(strings_short)
var nota_enc_strings_long = nota.encode(strings_long)
var nota_enc_strings_unicode = nota.encode(strings_unicode)
var nota_enc_nested = nota.encode(nested_records)
var nota_enc_flat = nota.encode(flat_record)
var nota_enc_mixed = nota.encode(mixed_payload)
var wota_enc_integers = wota.encode(integers_small)
var wota_enc_floats = wota.encode(floats_array)
var wota_enc_strings_short = wota.encode(strings_short)
var wota_enc_strings_long = wota.encode(strings_long)
var wota_enc_strings_unicode = wota.encode(strings_unicode)
var wota_enc_nested = wota.encode(nested_records)
var wota_enc_flat = wota.encode(flat_record)
var wota_enc_mixed = wota.encode(mixed_payload)
var json_enc_integers = json.encode(integers_small)
var json_enc_floats = json.encode(floats_array)
var json_enc_strings_short = json.encode(strings_short)
var json_enc_strings_long = json.encode(strings_long)
var json_enc_strings_unicode = json.encode(strings_unicode)
var json_enc_nested = json.encode(nested_records)
var json_enc_flat = json.encode(flat_record)
var json_enc_mixed = json.encode(mixed_payload)
// --- Benchmark functions ---
return {
// NOTA encode
nota_encode_integers: function(n) {
var i = 0
var r = null
for (i = 0; i < n; i++) { r = nota.encode(integers_small) }
return r
},
nota_encode_floats: function(n) {
var i = 0
var r = null
for (i = 0; i < n; i++) { r = nota.encode(floats_array) }
return r
},
nota_encode_strings_short: function(n) {
var i = 0
var r = null
for (i = 0; i < n; i++) { r = nota.encode(strings_short) }
return r
},
nota_encode_strings_long: function(n) {
var i = 0
var r = null
for (i = 0; i < n; i++) { r = nota.encode(strings_long) }
return r
},
nota_encode_strings_unicode: function(n) {
var i = 0
var r = null
for (i = 0; i < n; i++) { r = nota.encode(strings_unicode) }
return r
},
nota_encode_nested: function(n) {
var i = 0
var r = null
for (i = 0; i < n; i++) { r = nota.encode(nested_records) }
return r
},
nota_encode_flat: function(n) {
var i = 0
var r = null
for (i = 0; i < n; i++) { r = nota.encode(flat_record) }
return r
},
nota_encode_mixed: function(n) {
var i = 0
var r = null
for (i = 0; i < n; i++) { r = nota.encode(mixed_payload) }
return r
},
// NOTA decode
nota_decode_integers: function(n) {
var i = 0
var r = null
for (i = 0; i < n; i++) { r = nota.decode(nota_enc_integers) }
return r
},
nota_decode_floats: function(n) {
var i = 0
var r = null
for (i = 0; i < n; i++) { r = nota.decode(nota_enc_floats) }
return r
},
nota_decode_strings_short: function(n) {
var i = 0
var r = null
for (i = 0; i < n; i++) { r = nota.decode(nota_enc_strings_short) }
return r
},
nota_decode_strings_long: function(n) {
var i = 0
var r = null
for (i = 0; i < n; i++) { r = nota.decode(nota_enc_strings_long) }
return r
},
nota_decode_strings_unicode: function(n) {
var i = 0
var r = null
for (i = 0; i < n; i++) { r = nota.decode(nota_enc_strings_unicode) }
return r
},
nota_decode_nested: function(n) {
var i = 0
var r = null
for (i = 0; i < n; i++) { r = nota.decode(nota_enc_nested) }
return r
},
nota_decode_flat: function(n) {
var i = 0
var r = null
for (i = 0; i < n; i++) { r = nota.decode(nota_enc_flat) }
return r
},
nota_decode_mixed: function(n) {
var i = 0
var r = null
for (i = 0; i < n; i++) { r = nota.decode(nota_enc_mixed) }
return r
},
// WOTA encode
wota_encode_integers: function(n) {
var i = 0
var r = null
for (i = 0; i < n; i++) { r = wota.encode(integers_small) }
return r
},
wota_encode_floats: function(n) {
var i = 0
var r = null
for (i = 0; i < n; i++) { r = wota.encode(floats_array) }
return r
},
wota_encode_strings_short: function(n) {
var i = 0
var r = null
for (i = 0; i < n; i++) { r = wota.encode(strings_short) }
return r
},
wota_encode_strings_long: function(n) {
var i = 0
var r = null
for (i = 0; i < n; i++) { r = wota.encode(strings_long) }
return r
},
wota_encode_strings_unicode: function(n) {
var i = 0
var r = null
for (i = 0; i < n; i++) { r = wota.encode(strings_unicode) }
return r
},
wota_encode_nested: function(n) {
var i = 0
var r = null
for (i = 0; i < n; i++) { r = wota.encode(nested_records) }
return r
},
wota_encode_flat: function(n) {
var i = 0
var r = null
for (i = 0; i < n; i++) { r = wota.encode(flat_record) }
return r
},
wota_encode_mixed: function(n) {
var i = 0
var r = null
for (i = 0; i < n; i++) { r = wota.encode(mixed_payload) }
return r
},
// WOTA decode
wota_decode_integers: function(n) {
var i = 0
var r = null
for (i = 0; i < n; i++) { r = wota.decode(wota_enc_integers) }
return r
},
wota_decode_floats: function(n) {
var i = 0
var r = null
for (i = 0; i < n; i++) { r = wota.decode(wota_enc_floats) }
return r
},
wota_decode_strings_short: function(n) {
var i = 0
var r = null
for (i = 0; i < n; i++) { r = wota.decode(wota_enc_strings_short) }
return r
},
wota_decode_strings_long: function(n) {
var i = 0
var r = null
for (i = 0; i < n; i++) { r = wota.decode(wota_enc_strings_long) }
return r
},
wota_decode_strings_unicode: function(n) {
var i = 0
var r = null
for (i = 0; i < n; i++) { r = wota.decode(wota_enc_strings_unicode) }
return r
},
wota_decode_nested: function(n) {
var i = 0
var r = null
for (i = 0; i < n; i++) { r = wota.decode(wota_enc_nested) }
return r
},
wota_decode_flat: function(n) {
var i = 0
var r = null
for (i = 0; i < n; i++) { r = wota.decode(wota_enc_flat) }
return r
},
wota_decode_mixed: function(n) {
var i = 0
var r = null
for (i = 0; i < n; i++) { r = wota.decode(wota_enc_mixed) }
return r
},
// JSON encode
json_encode_integers: function(n) {
var i = 0
var r = null
for (i = 0; i < n; i++) { r = json.encode(integers_small) }
return r
},
json_encode_floats: function(n) {
var i = 0
var r = null
for (i = 0; i < n; i++) { r = json.encode(floats_array) }
return r
},
json_encode_strings_short: function(n) {
var i = 0
var r = null
for (i = 0; i < n; i++) { r = json.encode(strings_short) }
return r
},
json_encode_strings_long: function(n) {
var i = 0
var r = null
for (i = 0; i < n; i++) { r = json.encode(strings_long) }
return r
},
json_encode_strings_unicode: function(n) {
var i = 0
var r = null
for (i = 0; i < n; i++) { r = json.encode(strings_unicode) }
return r
},
json_encode_nested: function(n) {
var i = 0
var r = null
for (i = 0; i < n; i++) { r = json.encode(nested_records) }
return r
},
json_encode_flat: function(n) {
var i = 0
var r = null
for (i = 0; i < n; i++) { r = json.encode(flat_record) }
return r
},
json_encode_mixed: function(n) {
var i = 0
var r = null
for (i = 0; i < n; i++) { r = json.encode(mixed_payload) }
return r
},
// JSON decode
json_decode_integers: function(n) {
var i = 0
var r = null
for (i = 0; i < n; i++) { r = json.decode(json_enc_integers) }
return r
},
json_decode_floats: function(n) {
var i = 0
var r = null
for (i = 0; i < n; i++) { r = json.decode(json_enc_floats) }
return r
},
json_decode_strings_short: function(n) {
var i = 0
var r = null
for (i = 0; i < n; i++) { r = json.decode(json_enc_strings_short) }
return r
},
json_decode_strings_long: function(n) {
var i = 0
var r = null
for (i = 0; i < n; i++) { r = json.decode(json_enc_strings_long) }
return r
},
json_decode_strings_unicode: function(n) {
var i = 0
var r = null
for (i = 0; i < n; i++) { r = json.decode(json_enc_strings_unicode) }
return r
},
json_decode_nested: function(n) {
var i = 0
var r = null
for (i = 0; i < n; i++) { r = json.decode(json_enc_nested) }
return r
},
json_decode_flat: function(n) {
var i = 0
var r = null
for (i = 0; i < n; i++) { r = json.decode(json_enc_flat) }
return r
},
json_decode_mixed: function(n) {
var i = 0
var r = null
for (i = 0; i < n; i++) { r = json.decode(json_enc_mixed) }
return r
}
}

126
benches/fibonacci.cm Normal file

@@ -0,0 +1,126 @@
// fibonacci.cm — Fibonacci variants kernel
// Tests recursion overhead, memoization patterns, iteration vs recursion.
// Naive recursive (exponential) — measures call overhead
function fib_naive(n) {
if (n <= 1) return n
return fib_naive(n - 1) + fib_naive(n - 2)
}
// Iterative (linear)
function fib_iter(n) {
var a = 0
var b = 1
var i = 0
var tmp = 0
for (i = 0; i < n; i++) {
tmp = a + b
a = b
b = tmp
}
return a
}
// Memoized recursive (tests object property lookup + recursion)
function make_memo_fib() {
var cache = {}
var fib = function(n) {
var key = text(n)
if (cache[key]) return cache[key]
var result = null
if (n <= 1) {
result = n
} else {
result = fib(n - 1) + fib(n - 2)
}
cache[key] = result
return result
}
return fib
}
// CPS (continuation passing style) — tests closure creation
function fib_cps(n, cont) {
if (n <= 1) return cont(n)
return fib_cps(n - 1, function(a) {
return fib_cps(n - 2, function(b) {
return cont(a + b)
})
})
}
// Matrix exponentiation style (accumulator)
function fib_matrix(n) {
var a = 1
var b = 0
var c = 0
var d = 1
var ta = 0
var tb = 0
var m = n
while (m > 0) {
if (m % 2 == 1) {
ta = a * d + b * c // wrong but stresses numeric ops
tb = b * d + a * c
a = ta
b = tb
}
ta = c * c + d * d
tb = d * (2 * c + d)
c = ta
d = tb
m = floor(m / 2)
}
return b
}
return {
fib_naive_25: function(n) {
var i = 0
var x = 0
for (i = 0; i < n; i++) x += fib_naive(25)
return x
},
fib_naive_30: function(n) {
var i = 0
var x = 0
for (i = 0; i < n; i++) x += fib_naive(30)
return x
},
fib_iter_80: function(n) {
var i = 0
var x = 0
for (i = 0; i < n; i++) x += fib_iter(80)
return x
},
fib_memo_100: function(n) {
var i = 0
var x = 0
var fib = null
for (i = 0; i < n; i++) {
fib = make_memo_fib()
x += fib(100)
}
return x
},
fib_cps_20: function(n) {
var i = 0
var x = 0
var identity = function(v) { return v }
for (i = 0; i < n; i++) {
x += fib_cps(20, identity)
}
return x
},
fib_matrix_80: function(n) {
var i = 0
var x = 0
for (i = 0; i < n; i++) x += fib_matrix(80)
return x
}
}

159
benches/hash_workload.cm Normal file

@@ -0,0 +1,159 @@
// hash_workload.cm — Hash-heavy / word-count / map-reduce kernel
// Stresses record (object) creation, property access, and string handling.
function make_words(count) {
// Generate a repeating word list to simulate text processing
var base_words = [
"the", "quick", "brown", "fox", "jumps", "over", "lazy", "dog",
"and", "cat", "sat", "on", "mat", "with", "hat", "bat",
"alpha", "beta", "gamma", "delta", "epsilon", "zeta", "eta", "theta",
"hello", "world", "foo", "bar", "baz", "qux", "quux", "corge"
]
var words = []
var i = 0
for (i = 0; i < count; i++) {
push(words, base_words[i % length(base_words)])
}
return words
}
// Word frequency count
function word_count(words) {
var freq = {}
var i = 0
var w = null
for (i = 0; i < length(words); i++) {
w = words[i]
if (freq[w]) {
freq[w] = freq[w] + 1
} else {
freq[w] = 1
}
}
return freq
}
// Find top-N words by frequency
function top_n(freq, n) {
var keys = array(freq)
var pairs = []
var i = 0
for (i = 0; i < length(keys); i++) {
push(pairs, {word: keys[i], count: freq[keys[i]]})
}
var sorted = sort(pairs, "count")
// Return last N (highest counts)
var result = []
var start = length(sorted) - n
if (start < 0) start = 0
for (i = start; i < length(sorted); i++) {
push(result, sorted[i])
}
return result
}
// Histogram: group words by length
function group_by_length(words) {
var groups = {}
var i = 0
var w = null
var k = null
for (i = 0; i < length(words); i++) {
w = words[i]
k = text(length(w))
if (!groups[k]) groups[k] = []
push(groups[k], w)
}
return groups
}
// Simple hash table with chaining (stress property access patterns)
function hash_table_ops(n) {
var table = {}
var i = 0
var k = null
var collisions = 0
// Insert phase
for (i = 0; i < n; i++) {
k = `key_${i % 512}`
if (table[k]) collisions++
table[k] = i
}
// Lookup phase
var found = 0
for (i = 0; i < n; i++) {
k = `key_${i % 512}`
if (table[k]) found++
}
// Delete phase
var deleted = 0
for (i = 0; i < n; i += 3) {
k = `key_${i % 512}`
if (table[k]) {
delete table[k]
deleted++
}
}
return found - deleted + collisions
}
var words_1k = make_words(1000)
var words_10k = make_words(10000)
return {
// Word count on 1K words
wordcount_1k: function(n) {
var i = 0
var freq = null
for (i = 0; i < n; i++) {
freq = word_count(words_1k)
}
return freq
},
// Word count on 10K words
wordcount_10k: function(n) {
var i = 0
var freq = null
for (i = 0; i < n; i++) {
freq = word_count(words_10k)
}
return freq
},
// Word count + top-10 extraction
wordcount_top10: function(n) {
var i = 0
var freq = null
var top = null
for (i = 0; i < n; i++) {
freq = word_count(words_10k)
top = top_n(freq, 10)
}
return top
},
// Group words by length
group_by_len: function(n) {
var i = 0
var groups = null
for (i = 0; i < n; i++) {
groups = group_by_length(words_10k)
}
return groups
},
// Hash table insert/lookup/delete
hash_table: function(n) {
var i = 0
var x = 0
for (i = 0; i < n; i++) {
x += hash_table_ops(2048)
}
return x
}
}

167
benches/json_walk.cm Normal file

@@ -0,0 +1,167 @@
// json_walk.cm — JSON parse + walk + serialize kernel
// Stresses strings, records, arrays, and recursive traversal.
var json = use('json')
function make_nested_object(depth, breadth) {
var obj = {}
var i = 0
var k = null
if (depth <= 0) {
for (i = 0; i < breadth; i++) {
k = `key_${i}`
obj[k] = i * 3.14
}
return obj
}
for (i = 0; i < breadth; i++) {
k = `node_${i}`
obj[k] = make_nested_object(depth - 1, breadth)
}
obj.value = depth
obj.name = `level_${depth}`
return obj
}
function make_array_data(size) {
var arr = []
var i = 0
for (i = 0; i < size; i++) {
push(arr, {
id: i,
name: `item_${i}`,
active: i % 2 == 0,
score: i * 1.5,
tags: [`tag_${i % 5}`, `tag_${(i + 1) % 5}`]
})
}
return arr
}
// Walk an object tree, counting nodes
function walk_count(obj) {
var count = 1
var keys = null
var i = 0
var v = null
if (is_object(obj)) {
keys = array(obj)
for (i = 0; i < length(keys); i++) {
v = obj[keys[i]]
if (is_object(v) || is_array(v)) {
count += walk_count(v)
}
}
} else if (is_array(obj)) {
for (i = 0; i < length(obj); i++) {
v = obj[i]
if (is_object(v) || is_array(v)) {
count += walk_count(v)
}
}
}
return count
}
// Walk and extract all numbers
function walk_sum(obj) {
var sum = 0
var keys = null
var i = 0
var v = null
if (is_object(obj)) {
keys = array(obj)
for (i = 0; i < length(keys); i++) {
v = obj[keys[i]]
if (is_number(v)) {
sum += v
} else if (is_object(v) || is_array(v)) {
sum += walk_sum(v)
}
}
} else if (is_array(obj)) {
for (i = 0; i < length(obj); i++) {
v = obj[i]
if (is_number(v)) {
sum += v
} else if (is_object(v) || is_array(v)) {
sum += walk_sum(v)
}
}
}
return sum
}
// Pre-build test data strings
var nested_obj = make_nested_object(3, 4)
var nested_json = json.encode(nested_obj)
var array_data = make_array_data(200)
var array_json = json.encode(array_data)
return {
// Parse nested JSON
json_parse_nested: function(n) {
var i = 0
var obj = null
for (i = 0; i < n; i++) {
obj = json.decode(nested_json)
}
return obj
},
// Parse array-of-records JSON
json_parse_array: function(n) {
var i = 0
var arr = null
for (i = 0; i < n; i++) {
arr = json.decode(array_json)
}
return arr
},
// Encode nested object to JSON
json_encode_nested: function(n) {
var i = 0
var s = null
for (i = 0; i < n; i++) {
s = json.encode(nested_obj)
}
return s
},
// Encode array to JSON
json_encode_array: function(n) {
var i = 0
var s = null
for (i = 0; i < n; i++) {
s = json.encode(array_data)
}
return s
},
// Parse + walk + count
json_roundtrip_walk: function(n) {
var i = 0
var obj = null
var count = 0
for (i = 0; i < n; i++) {
obj = json.decode(nested_json)
count += walk_count(obj)
}
return count
},
// Parse + sum all numbers + re-encode
json_roundtrip_full: function(n) {
var i = 0
var obj = null
var sum = 0
var out = null
for (i = 0; i < n; i++) {
obj = json.decode(array_json)
sum += walk_sum(obj)
out = json.encode(obj)
}
return sum + length(out)
}
}

benches/micro_ops.cm

@@ -1,24 +1,24 @@
// micro_ops.bench.ce (or .cm depending on your convention)
// micro_ops.cm — microbenchmarks for core operations
// Note: We use a function-local sink in each benchmark to avoid cross-contamination
function blackhole(sink, x) {
// Prevent dead-code elimination
return (sink + (x | 0)) | 0
}
function make_obj_xy(x, y) {
return { x, y }
return {x: x, y: y}
}
function make_obj_yx(x, y) {
// Different insertion order to force a different shape in many engines
return { y, x }
// Different insertion order to force a different shape
return {y: y, x: x}
}
function make_shapes(n) {
var out = []
for (var i = 0; i < n; i++) {
var o = { a: i }
var i = 0
var o = null
for (i = 0; i < n; i++) {
o = {a: i}
o[`p${i}`] = i
push(out, o)
}
@@ -27,13 +27,15 @@ function make_shapes(n) {
function make_packed_array(n) {
var a = []
for (var i = 0; i < n; i++) push(a, i)
var i = 0
for (i = 0; i < n; i++) push(a, i)
return a
}
function make_holey_array(n) {
var a = []
for (var i = 0; i < n; i += 2) a[i] = i
var i = 0
for (i = 0; i < n; i += 2) a[i] = i
return a
}
@@ -41,7 +43,8 @@ return {
// 0) Baseline loop cost
loop_empty: function(n) {
var sink = 0
for (var i = 0; i < n; i++) {}
var i = 0
for (i = 0; i < n; i++) {}
return blackhole(sink, n)
},
@@ -49,35 +52,40 @@ return {
i32_add: function(n) {
var sink = 0
var x = 1
for (var i = 0; i < n; i++) x = (x + 3) | 0
var i = 0
for (i = 0; i < n; i++) x = (x + 3) | 0
return blackhole(sink, x)
},
f64_add: function(n) {
var sink = 0
var x = 1.0
for (var i = 0; i < n; i++) x = x + 3.14159
var i = 0
for (i = 0; i < n; i++) x = x + 3.14159
return blackhole(sink, x | 0)
},
mixed_add: function(n) {
var sink = 0
var x = 1
for (var i = 0; i < n; i++) x = x + 0.25
var i = 0
for (i = 0; i < n; i++) x = x + 0.25
return blackhole(sink, x | 0)
},
bit_ops: function(n) {
var sink = 0
var x = 0x12345678
for (var i = 0; i < n; i++) x = ((x << 5) ^ (x >>> 3)) | 0
var i = 0
for (i = 0; i < n; i++) x = ((x << 5) ^ (x >>> 3)) | 0
return blackhole(sink, x)
},
overflow_path: function(n) {
var sink = 0
var x = 0x70000000
for (var i = 0; i < n; i++) x = (x + 0x10000000) | 0
var i = 0
for (i = 0; i < n; i++) x = (x + 0x10000000) | 0
return blackhole(sink, x)
},
@@ -85,7 +93,8 @@ return {
branch_predictable: function(n) {
var sink = 0
var x = 0
for (var i = 0; i < n; i++) {
var i = 0
for (i = 0; i < n; i++) {
if ((i & 7) != 0) x++
else x += 2
}
@@ -95,7 +104,8 @@ return {
branch_alternating: function(n) {
var sink = 0
var x = 0
for (var i = 0; i < n; i++) {
var i = 0
for (i = 0; i < n; i++) {
if ((i & 1) == 0) x++
else x += 2
}
@@ -105,29 +115,47 @@ return {
// 3) Calls
call_direct: function(n) {
var sink = 0
function f(a) { return (a + 1) | 0 }
var f = function(a) { return (a + 1) | 0 }
var x = 0
for (var i = 0; i < n; i++) x = f(x)
var i = 0
for (i = 0; i < n; i++) x = f(x)
return blackhole(sink, x)
},
call_indirect: function(n) {
var sink = 0
function f(a) { return (a + 1) | 0 }
var f = function(a) { return (a + 1) | 0 }
var g = f
var x = 0
for (var i = 0; i < n; i++) x = g(x)
var i = 0
for (i = 0; i < n; i++) x = g(x)
return blackhole(sink, x)
},
call_closure: function(n) {
var sink = 0
function make_adder(k) {
var make_adder = function(k) {
return function(a) { return (a + k) | 0 }
}
var add3 = make_adder(3)
var x = 0
for (var i = 0; i < n; i++) x = add3(x)
var i = 0
for (i = 0; i < n; i++) x = add3(x)
return blackhole(sink, x)
},
call_multi_arity: function(n) {
var sink = 0
var f0 = function() { return 1 }
var f1 = function(a) { return a + 1 }
var f2 = function(a, b) { return a + b }
var f3 = function(a, b, c) { return a + b + c }
var f4 = function(a, b, c, d) { return a + b + c + d }
var x = 0
var i = 0
for (i = 0; i < n; i++) {
x = (x + f0() + f1(i) + f2(i, 1) + f3(i, 1, 2) + f4(i, 1, 2, 3)) | 0
}
return blackhole(sink, x)
},
@@ -136,7 +164,8 @@ return {
var sink = 0
var o = make_obj_xy(1, 2)
var x = 0
for (var i = 0; i < n; i++) x = (x + o.x) | 0
var i = 0
for (i = 0; i < n; i++) x = (x + o.x) | 0
return blackhole(sink, x)
},
@@ -145,20 +174,38 @@ return {
var a = make_obj_xy(1, 2)
var b = make_obj_yx(1, 2)
var x = 0
for (var i = 0; i < n; i++) {
var o = (i & 1) == 0 ? a : b
var i = 0
var o = null
for (i = 0; i < n; i++) {
o = (i & 1) == 0 ? a : b
x = (x + o.x) | 0
}
return blackhole(sink, x)
},
prop_read_poly_4: function(n) {
var sink = 0
var shapes = [
{x: 1, y: 2},
{y: 2, x: 1},
{x: 1, z: 3, y: 2},
{w: 0, x: 1, y: 2}
]
var x = 0
var i = 0
for (i = 0; i < n; i++) {
x = (x + shapes[i & 3].x) | 0
}
return blackhole(sink, x)
},
prop_read_mega: function(n) {
var sink = 0
var objs = make_shapes(32)
var x = 0
for (var i = 0; i < n; i++) {
var o = objs[i & 31]
x = (x + o.a) | 0
var i = 0
for (i = 0; i < n; i++) {
x = (x + objs[i & 31].a) | 0
}
return blackhole(sink, x)
},
@@ -166,7 +213,8 @@ return {
prop_write_mono: function(n) {
var sink = 0
var o = make_obj_xy(1, 2)
for (var i = 0; i < n; i++) o.x = (o.x + 1) | 0
var i = 0
for (i = 0; i < n; i++) o.x = (o.x + 1) | 0
return blackhole(sink, o.x)
},
@@ -175,14 +223,16 @@ return {
var sink = 0
var a = make_packed_array(1024)
var x = 0
for (var i = 0; i < n; i++) x = (x + a[i & 1023]) | 0
var i = 0
for (i = 0; i < n; i++) x = (x + a[i & 1023]) | 0
return blackhole(sink, x)
},
array_write_packed: function(n) {
var sink = 0
var a = make_packed_array(1024)
for (var i = 0; i < n; i++) a[i & 1023] = i
var i = 0
for (i = 0; i < n; i++) a[i & 1023] = i
return blackhole(sink, a[17] | 0)
},
@@ -190,9 +240,10 @@ return {
var sink = 0
var a = make_holey_array(2048)
var x = 0
for (var i = 0; i < n; i++) {
var v = a[(i & 2047)]
// If "missing" is a special value in your language, this stresses that path too
var i = 0
var v = null
for (i = 0; i < n; i++) {
v = a[(i & 2047)]
if (v) x = (x + v) | 0
}
return blackhole(sink, x)
@@ -201,21 +252,97 @@ return {
array_push_steady: function(n) {
var sink = 0
var x = 0
for (var j = 0; j < n; j++) {
var a = []
for (var i = 0; i < 256; i++) push(a, i)
var j = 0
var i = 0
var a = null
for (j = 0; j < n; j++) {
a = []
for (i = 0; i < 256; i++) push(a, i)
x = (x + length(a)) | 0
}
return blackhole(sink, x)
},
array_push_pop: function(n) {
var sink = 0
var a = []
var x = 0
var i = 0
var v = 0
for (i = 0; i < n; i++) {
push(a, i)
if (length(a) > 64) {
v = pop(a)
x = (x + v) | 0
}
}
return blackhole(sink, x)
},
array_indexed_sum: function(n) {
var sink = 0
var a = make_packed_array(1024)
var x = 0
var j = 0
var i = 0
for (j = 0; j < n; j++) {
x = 0
for (i = 0; i < 1024; i++) {
x = (x + a[i]) | 0
}
}
return blackhole(sink, x)
},
// 6) Strings
string_concat_small: function(n) {
var sink = 0
var x = 0
for (var j = 0; j < n; j++) {
var s = ""
for (var i = 0; i < 16; i++) s = s + "x"
var j = 0
var i = 0
var s = null
for (j = 0; j < n; j++) {
s = ""
for (i = 0; i < 16; i++) s = s + "x"
x = (x + length(s)) | 0
}
return blackhole(sink, x)
},
string_concat_medium: function(n) {
var sink = 0
var x = 0
var j = 0
var i = 0
var s = null
for (j = 0; j < n; j++) {
s = ""
for (i = 0; i < 100; i++) s = s + "abcdefghij"
x = (x + length(s)) | 0
}
return blackhole(sink, x)
},
string_interpolation: function(n) {
var sink = 0
var x = 0
var i = 0
var s = null
for (i = 0; i < n; i++) {
s = `item_${i}_value_${i * 2}`
x = (x + length(s)) | 0
}
return blackhole(sink, x)
},
string_slice: function(n) {
var sink = 0
var base = "the quick brown fox jumps over the lazy dog"
var x = 0
var i = 0
var s = null
for (i = 0; i < n; i++) {
s = text(base, i % 10, i % 10 + 10)
x = (x + length(s)) | 0
}
return blackhole(sink, x)
@@ -225,8 +352,10 @@ return {
alloc_tiny_objects: function(n) {
var sink = 0
var x = 0
for (var i = 0; i < n; i++) {
var o = { a: i, b: i + 1, c: i + 2 }
var i = 0
var o = null
for (i = 0; i < n; i++) {
o = {a: i, b: i + 1, c: i + 2}
x = (x + o.b) | 0
}
return blackhole(sink, x)
@@ -235,9 +364,12 @@ return {
alloc_linked_list: function(n) {
var sink = 0
var head = null
for (var i = 0; i < n; i++) head = { v: i, next: head }
var i = 0
var x = 0
var p = head
var p = null
for (i = 0; i < n; i++) head = {v: i, next: head}
x = 0
p = head
while (p) {
x = (x + p.v) | 0
p = p.next
@@ -245,18 +377,118 @@ return {
return blackhole(sink, x)
},
// 8) meme-specific (adapt these to your exact semantics)
meme_clone_read: function(n) {
// If meme(obj) clones like Object.create / prototypal clone, this hits it hard.
// Replace with your exact meme call form.
alloc_arrays: function(n) {
var sink = 0
var base = { x: 1, y: 2 }
var x = 0
for (var i = 0; i < n; i++) {
var o = meme(base)
var i = 0
var a = null
for (i = 0; i < n; i++) {
a = [i, i + 1, i + 2, i + 3]
x = (x + a[2]) | 0
}
return blackhole(sink, x)
},
alloc_short_lived: function(n) {
var sink = 0
var x = 0
var i = 0
var o = null
// Allocate objects that immediately become garbage
for (i = 0; i < n; i++) {
o = {val: i, data: {inner: i + 1}}
x = (x + o.data.inner) | 0
}
return blackhole(sink, x)
},
alloc_long_lived_pressure: function(n) {
var sink = 0
var store = []
var x = 0
var i = 0
var o = null
// Keep first 1024 objects alive, churn the rest
for (i = 0; i < n; i++) {
o = {val: i, data: i * 2}
if (i < 1024) {
push(store, o)
}
x = (x + o.data) | 0
}
return blackhole(sink, x)
},
// 8) Meme (prototype clone)
meme_clone_read: function(n) {
var sink = 0
var base = {x: 1, y: 2}
var x = 0
var i = 0
var o = null
for (i = 0; i < n; i++) {
o = meme(base)
x = (x + o.x) | 0
}
return blackhole(sink, x)
},
// 9) Guard / type check paths
guard_hot_number: function(n) {
// Monomorphic number path — guards should hoist
var sink = 0
var x = 1
var i = 0
for (i = 0; i < n; i++) x = x + 1
return blackhole(sink, x | 0)
},
guard_mixed_types: function(n) {
// Alternating number/text — guards must stay
var sink = 0
var vals = [1, "a", 2, "b", 3, "c", 4, "d"]
var x = 0
var i = 0
for (i = 0; i < n; i++) {
if (is_number(vals[i & 7])) x = (x + vals[i & 7]) | 0
}
return blackhole(sink, x)
},
// 10) Reduce / higher-order
reduce_sum: function(n) {
var sink = 0
var a = make_packed_array(256)
var x = 0
var i = 0
for (i = 0; i < n; i++) {
x = (x + reduce(a, function(acc, v) { return acc + v }, 0)) | 0
}
return blackhole(sink, x)
},
filter_evens: function(n) {
var sink = 0
var a = make_packed_array(256)
var x = 0
var i = 0
for (i = 0; i < n; i++) {
x = (x + length(filter(a, function(v) { return v % 2 == 0 }))) | 0
}
return blackhole(sink, x)
},
arrfor_sum: function(n) {
var sink = 0
var a = make_packed_array(256)
var x = 0
var i = 0
var sum = 0
for (i = 0; i < n; i++) {
sum = 0
arrfor(a, function(v) { sum += v })
x = (x + sum) | 0
}
return blackhole(sink, x)
}
}

249
benches/module_load.cm Normal file

@@ -0,0 +1,249 @@
// module_load.cm — Module loading simulation (macro benchmark)
// Simulates parsing many small modules, linking, and running.
// Tests the "build scenario" pattern.
var json = use('json')
// Simulate a small module: parse token stream + build AST + evaluate
function tokenize(src) {
var tokens = []
var i = 0
var ch = null
var chars = array(src)
var buf = ""
for (i = 0; i < length(chars); i++) {
ch = chars[i]
if (ch == " " || ch == "\n" || ch == "\t") {
if (length(buf) > 0) {
push(tokens, buf)
buf = ""
}
} else if (ch == "(" || ch == ")" || ch == "+" || ch == "-"
|| ch == "*" || ch == "=" || ch == ";" || ch == ",") {
if (length(buf) > 0) {
push(tokens, buf)
buf = ""
}
push(tokens, ch)
} else {
buf = buf + ch
}
}
if (length(buf) > 0) push(tokens, buf)
return tokens
}
// Build a simple AST from tokens
function parse_tokens(tokens) {
var ast = []
var i = 0
var tok = null
var node = null
for (i = 0; i < length(tokens); i++) {
tok = tokens[i]
if (tok == "var" || tok == "def") {
node = {type: "decl", kind: tok, name: null, value: null}
i++
if (i < length(tokens)) node.name = tokens[i]
i++ // skip =
i++
if (i < length(tokens)) node.value = tokens[i]
push(ast, node)
} else if (tok == "return") {
node = {type: "return", value: null}
i++
if (i < length(tokens)) node.value = tokens[i]
push(ast, node)
} else if (tok == "function") {
node = {type: "func", name: null, body: []}
i++
if (i < length(tokens)) node.name = tokens[i]
// Skip to matching )
while (i < length(tokens) && tokens[i] != ")") i++
push(ast, node)
} else {
push(ast, {type: "expr", value: tok})
}
}
return ast
}
// Evaluate: simple symbol table + resolution
function evaluate(ast, env) {
var result = null
var i = 0
var node = null
for (i = 0; i < length(ast); i++) {
node = ast[i]
if (node.type == "decl") {
env[node.name] = node.value
} else if (node.type == "return") {
result = node.value
if (env[result]) result = env[result]
} else if (node.type == "func") {
env[node.name] = node
}
}
return result
}
// Generate fake module source code
function generate_module(id, dep_count) {
var src = ""
var i = 0
src = src + "var _id = " + text(id) + ";\n"
for (i = 0; i < dep_count; i++) {
src = src + "var dep" + text(i) + " = use(mod_" + text(i) + ");\n"
}
src = src + "var x = " + text(id * 17) + ";\n"
src = src + "var y = " + text(id * 31) + ";\n"
src = src + "function compute(a, b) { return a + b; }\n"
src = src + "var result = compute(x, y);\n"
src = src + "return result;\n"
return src
}
// Simulate loading N modules with dependency chains
function simulate_build(n_modules, deps_per_module) {
var modules = []
var loaded = {}
var i = 0
var j = 0
var src = null
var tokens = null
var ast = null
var env = null
var result = null
var total_tokens = 0
var total_nodes = 0
// Generate all module sources
for (i = 0; i < n_modules; i++) {
src = generate_module(i, deps_per_module)
push(modules, src)
}
// "Load" each module: tokenize → parse → evaluate
for (i = 0; i < n_modules; i++) {
tokens = tokenize(modules[i])
total_tokens += length(tokens)
ast = parse_tokens(tokens)
total_nodes += length(ast)
env = {}
// Resolve dependencies
for (j = 0; j < deps_per_module; j++) {
if (j < i) {
env["dep" + text(j)] = loaded["mod_" + text(j)]
}
}
result = evaluate(ast, env)
loaded["mod_" + text(i)] = result
}
return {
modules: n_modules,
total_tokens: total_tokens,
total_nodes: total_nodes,
last_result: result
}
}
// Dependency graph analysis (topological sort simulation)
function topo_sort(n_modules, deps_per_module) {
// Build adjacency list
var adj = {}
var in_degree = {}
var i = 0
var j = 0
var name = null
var dep = null
for (i = 0; i < n_modules; i++) {
name = "mod_" + text(i)
adj[name] = []
in_degree[name] = 0
}
for (i = 0; i < n_modules; i++) {
name = "mod_" + text(i)
for (j = 0; j < deps_per_module; j++) {
if (j < i) {
dep = "mod_" + text(j)
push(adj[dep], name)
in_degree[name] = in_degree[name] + 1
}
}
}
// Kahn's algorithm
var queue = []
var keys = array(in_degree)
for (i = 0; i < length(keys); i++) {
if (in_degree[keys[i]] == 0) push(queue, keys[i])
}
var order = []
var current = null
var neighbors = null
var qi = 0
while (qi < length(queue)) {
current = queue[qi]
qi++
push(order, current)
neighbors = adj[current]
if (neighbors) {
for (i = 0; i < length(neighbors); i++) {
in_degree[neighbors[i]] = in_degree[neighbors[i]] - 1
if (in_degree[neighbors[i]] == 0) push(queue, neighbors[i])
}
}
}
return order
}
return {
// Small build: 50 modules, 3 deps each
build_50: function(n) {
var i = 0
var result = null
for (i = 0; i < n; i++) {
result = simulate_build(50, 3)
}
return result
},
// Medium build: 200 modules, 5 deps each
build_200: function(n) {
var i = 0
var result = null
for (i = 0; i < n; i++) {
result = simulate_build(200, 5)
}
return result
},
// Large build: 500 modules, 5 deps each
build_500: function(n) {
var i = 0
var result = null
for (i = 0; i < n; i++) {
result = simulate_build(500, 5)
}
return result
},
// Topo sort of 500 module dependency graph
topo_sort_500: function(n) {
var i = 0
var order = null
for (i = 0; i < n; i++) {
order = topo_sort(500, 5)
}
return order
}
}

160
benches/nbody.cm Normal file

@@ -0,0 +1,160 @@
// nbody.cm — N-body gravitational simulation kernel
// Pure numeric + allocation workload. Classic VM benchmark.
var math = use('math/radians')
def PI = 3.141592653589793
def SOLAR_MASS = 4 * PI * PI
def DAYS_PER_YEAR = 365.24
function make_system() {
// Sun + 4 Jovian planets
var sun = {x: 0, y: 0, z: 0, vx: 0, vy: 0, vz: 0, mass: SOLAR_MASS}
var jupiter = {
x: 4.84143144246472090,
y: -1.16032004402742839,
z: -0.103622044471123109,
vx: 0.00166007664274403694 * DAYS_PER_YEAR,
vy: 0.00769901118419740425 * DAYS_PER_YEAR,
vz: -0.0000690460016972063023 * DAYS_PER_YEAR,
mass: 0.000954791938424326609 * SOLAR_MASS
}
var saturn = {
x: 8.34336671824457987,
y: 4.12479856412430479,
z: -0.403523417114321381,
vx: -0.00276742510726862411 * DAYS_PER_YEAR,
vy: 0.00499852801234917238 * DAYS_PER_YEAR,
vz: 0.0000230417297573763929 * DAYS_PER_YEAR,
mass: 0.000285885980666130812 * SOLAR_MASS
}
var uranus = {
x: 12.8943695621391310,
y: -15.1111514016986312,
z: -0.223307578892655734,
vx: 0.00296460137564761618 * DAYS_PER_YEAR,
vy: 0.00237847173959480950 * DAYS_PER_YEAR,
vz: -0.0000296589568540237556 * DAYS_PER_YEAR,
mass: 0.0000436624404335156298 * SOLAR_MASS
}
var neptune = {
x: 15.3796971148509165,
y: -25.9193146099879641,
z: 0.179258772950371181,
vx: 0.00268067772490389322 * DAYS_PER_YEAR,
vy: 0.00162824170038242295 * DAYS_PER_YEAR,
vz: -0.0000951592254519715870 * DAYS_PER_YEAR,
mass: 0.0000515138902046611451 * SOLAR_MASS
}
var bodies = [sun, jupiter, saturn, uranus, neptune]
// Offset momentum
var px = 0
var py = 0
var pz = 0
var i = 0
for (i = 0; i < length(bodies); i++) {
px += bodies[i].vx * bodies[i].mass
py += bodies[i].vy * bodies[i].mass
pz += bodies[i].vz * bodies[i].mass
}
sun.vx = -px / SOLAR_MASS
sun.vy = -py / SOLAR_MASS
sun.vz = -pz / SOLAR_MASS
return bodies
}
function advance(bodies, dt) {
var n = length(bodies)
var i = 0
var j = 0
var bi = null
var bj = null
var dx = 0
var dy = 0
var dz = 0
var dist_sq = 0
var dist = 0
var mag = 0
for (i = 0; i < n; i++) {
bi = bodies[i]
for (j = i + 1; j < n; j++) {
bj = bodies[j]
dx = bi.x - bj.x
dy = bi.y - bj.y
dz = bi.z - bj.z
dist_sq = dx * dx + dy * dy + dz * dz
dist = math.sqrt(dist_sq)
mag = dt / (dist_sq * dist)
bi.vx -= dx * bj.mass * mag
bi.vy -= dy * bj.mass * mag
bi.vz -= dz * bj.mass * mag
bj.vx += dx * bi.mass * mag
bj.vy += dy * bi.mass * mag
bj.vz += dz * bi.mass * mag
}
}
for (i = 0; i < n; i++) {
bi = bodies[i]
bi.x += dt * bi.vx
bi.y += dt * bi.vy
bi.z += dt * bi.vz
}
}
function energy(bodies) {
var e = 0
var n = length(bodies)
var i = 0
var j = 0
var bi = null
var bj = null
var dx = 0
var dy = 0
var dz = 0
for (i = 0; i < n; i++) {
bi = bodies[i]
e += 0.5 * bi.mass * (bi.vx * bi.vx + bi.vy * bi.vy + bi.vz * bi.vz)
for (j = i + 1; j < n; j++) {
bj = bodies[j]
dx = bi.x - bj.x
dy = bi.y - bj.y
dz = bi.z - bj.z
e -= (bi.mass * bj.mass) / math.sqrt(dx * dx + dy * dy + dz * dz)
}
}
return e
}
return {
nbody_1k: function(n) {
var i = 0
var j = 0
var bodies = null
for (i = 0; i < n; i++) {
bodies = make_system()
for (j = 0; j < 1000; j++) advance(bodies, 0.01)
energy(bodies)
}
},
nbody_10k: function(n) {
var i = 0
var j = 0
var bodies = null
for (i = 0; i < n; i++) {
bodies = make_system()
for (j = 0; j < 10000; j++) advance(bodies, 0.01)
energy(bodies)
}
}
}
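The pairwise loop in advance() only ever applies equal and opposite impulses, which is why the simulation conserves total momentum by construction. An illustrative JavaScript port of the kernel (the benchmark itself is in the repo's own language; the two-body data here is a made-up toy system, not the solar-system setup above):

```javascript
// Illustrative JS port of the advance() kernel from nbody.cm.
function advance(bodies, dt) {
  const n = bodies.length;
  for (let i = 0; i < n; i++) {
    const bi = bodies[i];
    for (let j = i + 1; j < n; j++) {
      const bj = bodies[j];
      const dx = bi.x - bj.x, dy = bi.y - bj.y, dz = bi.z - bj.z;
      const distSq = dx * dx + dy * dy + dz * dz;
      const mag = dt / (distSq * Math.sqrt(distSq));
      // Equal and opposite impulses, scaled by the other body's mass.
      bi.vx -= dx * bj.mass * mag; bi.vy -= dy * bj.mass * mag; bi.vz -= dz * bj.mass * mag;
      bj.vx += dx * bi.mass * mag; bj.vy += dy * bi.mass * mag; bj.vz += dz * bi.mass * mag;
    }
  }
  for (const b of bodies) { b.x += dt * b.vx; b.y += dt * b.vy; b.z += dt * b.vz; }
}

// Toy two-body system: initial total y-momentum is 1 * 0.5 = 0.5.
const pair = [
  { x: 0, y: 0, z: 0, vx: 0, vy: 0,   vz: 0, mass: 3 },
  { x: 1, y: 0, z: 0, vx: 0, vy: 0.5, vz: 0, mass: 1 },
];
for (let s = 0; s < 10; s++) advance(pair, 0.01);
const py = pair.reduce((s, b) => s + b.mass * b.vy, 0);
```

After any number of steps, `py` stays at its initial 0.5 up to rounding error, which is a quick sanity check on the kernel's symmetry.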

154
benches/ray_tracer.cm Normal file

@@ -0,0 +1,154 @@
// ray_tracer.cm — Simple ray tracer kernel
// Control flow + numeric + allocation. Classic VM benchmark.
var math = use('math/radians')
function vec(x, y, z) {
return {x: x, y: y, z: z}
}
function vadd(a, b) {
return {x: a.x + b.x, y: a.y + b.y, z: a.z + b.z}
}
function vsub(a, b) {
return {x: a.x - b.x, y: a.y - b.y, z: a.z - b.z}
}
function vmul(v, s) {
return {x: v.x * s, y: v.y * s, z: v.z * s}
}
function vdot(a, b) {
return a.x * b.x + a.y * b.y + a.z * b.z
}
function vnorm(v) {
var len = math.sqrt(vdot(v, v))
if (len == 0) return vec(0, 0, 0)
return vmul(v, 1 / len)
}
function make_sphere(center, radius, color) {
return {
center: center,
radius: radius,
color: color
}
}
function intersect_sphere(origin, dir, sphere) {
var oc = vsub(origin, sphere.center)
var b = vdot(oc, dir)
var c = vdot(oc, oc) - sphere.radius * sphere.radius
var disc = b * b - c
if (disc < 0) return -1
var sq = math.sqrt(disc)
var t1 = -b - sq
var t2 = -b + sq
if (t1 > 0.001) return t1
if (t2 > 0.001) return t2
return -1
}
function make_scene() {
var spheres = [
make_sphere(vec(0, -1, 5), 1, vec(1, 0, 0)),
make_sphere(vec(2, 0, 6), 1, vec(0, 1, 0)),
make_sphere(vec(-2, 0, 4), 1, vec(0, 0, 1)),
make_sphere(vec(0, 1, 4.5), 0.5, vec(1, 1, 0)),
make_sphere(vec(1, -0.5, 3), 0.3, vec(1, 0, 1)),
make_sphere(vec(0, -101, 5), 100, vec(0.5, 0.5, 0.5))
]
var light = vnorm(vec(1, 1, -1))
return {spheres: spheres, light: light}
}
function trace(origin, dir, scene) {
var closest_t = 999999
var closest_sphere = null
var i = 0
var t = 0
for (i = 0; i < length(scene.spheres); i++) {
t = intersect_sphere(origin, dir, scene.spheres[i])
if (t > 0 && t < closest_t) {
closest_t = t
closest_sphere = scene.spheres[i]
}
}
if (!closest_sphere) return vec(0.2, 0.3, 0.5) // sky color
var hit = vadd(origin, vmul(dir, closest_t))
var normal = vnorm(vsub(hit, closest_sphere.center))
var diffuse = vdot(normal, scene.light)
if (diffuse < 0) diffuse = 0
// Shadow check
var shadow_origin = vadd(hit, vmul(normal, 0.001))
var in_shadow = false
for (i = 0; i < length(scene.spheres); i++) {
if (scene.spheres[i] != closest_sphere) {
t = intersect_sphere(shadow_origin, scene.light, scene.spheres[i])
if (t > 0) {
in_shadow = true
break
}
}
}
var ambient = 0.15
var intensity = in_shadow ? ambient : ambient + diffuse * 0.85
return vmul(closest_sphere.color, intensity)
}
function render(width, height, scene) {
var aspect = width / height
var fov = 1.0
var total_r = 0
var total_g = 0
var total_b = 0
var y = 0
var x = 0
var u = 0
var v = 0
var dir = null
var color = null
var origin = vec(0, 0, 0)
for (y = 0; y < height; y++) {
for (x = 0; x < width; x++) {
u = (2 * (x + 0.5) / width - 1) * aspect * fov
v = (1 - 2 * (y + 0.5) / height) * fov
dir = vnorm(vec(u, v, 1))
color = trace(origin, dir, scene)
total_r += color.x
total_g += color.y
total_b += color.z
}
}
return {r: total_r, g: total_g, b: total_b}
}
var scene = make_scene()
return {
raytrace_32x32: function(n) {
var i = 0
var result = null
for (i = 0; i < n; i++) {
result = render(32, 32, scene)
}
return result
},
raytrace_64x64: function(n) {
var i = 0
var result = null
for (i = 0; i < n; i++) {
result = render(64, 64, scene)
}
return result
}
}
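intersect_sphere uses the half-b form of the ray-sphere quadratic (valid because `dir` is normalized) and returns the nearest hit beyond a small epsilon. A JavaScript sketch of the same math with a hand-checkable case:

```javascript
// JS sketch of intersect_sphere() from ray_tracer.cm (half-b quadratic; dir must be unit length).
function intersectSphere(origin, dir, sphere) {
  const oc = { x: origin.x - sphere.center.x,
               y: origin.y - sphere.center.y,
               z: origin.z - sphere.center.z };
  const b = oc.x * dir.x + oc.y * dir.y + oc.z * dir.z;
  const c = oc.x * oc.x + oc.y * oc.y + oc.z * oc.z - sphere.radius * sphere.radius;
  const disc = b * b - c;
  if (disc < 0) return -1;   // ray misses the sphere
  const sq = Math.sqrt(disc);
  const t1 = -b - sq;        // near intersection
  const t2 = -b + sq;        // far intersection (used when origin is inside)
  if (t1 > 0.001) return t1;
  if (t2 > 0.001) return t2;
  return -1;
}

// Unit sphere centered 5 units along +z, ray straight down +z: near hit at t = 4.
const t = intersectSphere({ x: 0, y: 0, z: 0 },
                          { x: 0, y: 0, z: 1 },
                          { center: { x: 0, y: 0, z: 5 }, radius: 1 });
// → 4
```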

251
benches/richards.cm Normal file

@@ -0,0 +1,251 @@
// richards.cm — Richards benchmark (scheduler simulation)
// Object-ish workload: dynamic dispatch, state machines, queuing.
def IDLE = 0
def WORKER = 1
def HANDLER_A = 2
def HANDLER_B = 3
def DEVICE_A = 4
def DEVICE_B = 5
def NUM_TASKS = 6
def TASK_RUNNING = 0
def TASK_WAITING = 1
def TASK_HELD = 2
def TASK_SUSPENDED = 3
function make_packet(link, id, kind) {
return {link: link, id: id, kind: kind, datum: 0, data: array(4, 0)}
}
function scheduler() {
var tasks = array(NUM_TASKS, null)
var current = null
var queue_count = 0
var hold_count = 0
var v1 = 0
var v2 = 0
var w_id = HANDLER_A
var w_datum = 0
var h_a_queue = null
var h_a_count = 0
var h_b_queue = null
var h_b_count = 0
var dev_a_pkt = null
var dev_b_pkt = null
var find_next = function() {
var best = null
var i = 0
for (i = 0; i < NUM_TASKS; i++) {
if (tasks[i] && tasks[i].state == TASK_RUNNING) {
if (!best || tasks[i].priority > best.priority) {
best = tasks[i]
}
}
}
return best
}
var hold_self = function() {
hold_count++
if (current) current.state = TASK_HELD
return find_next()
}
var release = function(id) {
var t = tasks[id]
if (!t) return find_next()
if (t.state == TASK_HELD) t.state = TASK_RUNNING
if (t.priority > (current ? current.priority : -1)) return t
return current
}
var queue_packet = function(pkt) {
var t = tasks[pkt.id]
var p = null
if (!t) return find_next()
queue_count++
pkt.link = null
pkt.id = current ? current.id : 0
if (!t.queue) {
t.queue = pkt
t.state = TASK_RUNNING
if (t.priority > (current ? current.priority : -1)) return t
} else {
p = t.queue
while (p.link) p = p.link
p.link = pkt
}
return current
}
// Idle task
tasks[IDLE] = {id: IDLE, priority: 0, queue: null, state: TASK_RUNNING,
hold_count: 0, queue_count: 0,
fn: function(pkt) {
v1--
if (v1 == 0) return hold_self()
if ((v2 & 1) == 0) {
v2 = v2 >> 1
return release(DEVICE_A)
}
v2 = (v2 >> 1) ^ 0xD008
return release(DEVICE_B)
}
}
// Worker task
tasks[WORKER] = {id: WORKER, priority: 1000, queue: null, state: TASK_SUSPENDED,
hold_count: 0, queue_count: 0,
fn: function(pkt) {
var i = 0
if (!pkt) return hold_self()
w_id = (w_id == HANDLER_A) ? HANDLER_B : HANDLER_A
pkt.id = w_id
pkt.datum = 0
for (i = 0; i < 4; i++) {
w_datum++
if (w_datum > 26) w_datum = 1
pkt.data[i] = 65 + w_datum
}
return queue_packet(pkt)
}
}
// Handler A
tasks[HANDLER_A] = {id: HANDLER_A, priority: 2000, queue: null, state: TASK_SUSPENDED,
hold_count: 0, queue_count: 0,
fn: function(pkt) {
var p = null
if (pkt) { h_a_queue = pkt; h_a_count++ }
if (h_a_queue) {
p = h_a_queue
h_a_queue = p.link
if (h_a_count < 3) return queue_packet(p)
return release(DEVICE_A)
}
return hold_self()
}
}
// Handler B
tasks[HANDLER_B] = {id: HANDLER_B, priority: 3000, queue: null, state: TASK_SUSPENDED,
hold_count: 0, queue_count: 0,
fn: function(pkt) {
var p = null
if (pkt) { h_b_queue = pkt; h_b_count++ }
if (h_b_queue) {
p = h_b_queue
h_b_queue = p.link
if (h_b_count < 3) return queue_packet(p)
return release(DEVICE_B)
}
return hold_self()
}
}
// Device A
tasks[DEVICE_A] = {id: DEVICE_A, priority: 4000, queue: null, state: TASK_SUSPENDED,
hold_count: 0, queue_count: 0,
fn: function(pkt) {
var p = null
if (pkt) { dev_a_pkt = pkt; return hold_self() }
if (dev_a_pkt) {
p = dev_a_pkt
dev_a_pkt = null
return queue_packet(p)
}
return hold_self()
}
}
// Device B
tasks[DEVICE_B] = {id: DEVICE_B, priority: 5000, queue: null, state: TASK_SUSPENDED,
hold_count: 0, queue_count: 0,
fn: function(pkt) {
var p = null
if (pkt) { dev_b_pkt = pkt; return hold_self() }
if (dev_b_pkt) {
p = dev_b_pkt
dev_b_pkt = null
return queue_packet(p)
}
return hold_self()
}
}
var run = function(iterations) {
var i = 0
var pkt1 = null
var pkt2 = null
var steps = 0
var pkt = null
var next = null
v1 = iterations
v2 = 0xBEEF
queue_count = 0
hold_count = 0
w_id = HANDLER_A
w_datum = 0
h_a_queue = null
h_a_count = 0
h_b_queue = null
h_b_count = 0
dev_a_pkt = null
dev_b_pkt = null
for (i = 0; i < NUM_TASKS; i++) {
if (tasks[i]) {
tasks[i].state = (i == IDLE) ? TASK_RUNNING : TASK_SUSPENDED
tasks[i].queue = null
}
}
pkt1 = make_packet(null, WORKER, 1)
pkt2 = make_packet(pkt1, WORKER, 1)
tasks[WORKER].queue = pkt2
tasks[WORKER].state = TASK_RUNNING
current = find_next()
while (current && steps < iterations * 10) {
pkt = current.queue
if (pkt) {
current.queue = pkt.link
current.queue_count++
}
next = current.fn(pkt)
if (next) current = next
else current = find_next()
steps++
}
return {queue_count: queue_count, hold_count: hold_count, steps: steps}
}
return {run: run}
}
return {
richards_100: function(n) {
var i = 0
var s = null
var result = null
for (i = 0; i < n; i++) {
s = scheduler()
result = s.run(100)
}
return result
},
richards_1k: function(n) {
var i = 0
var s = null
var result = null
for (i = 0; i < n; i++) {
s = scheduler()
result = s.run(1000)
}
return result
}
}

180
benches/sorting.cm Normal file

@@ -0,0 +1,180 @@
// sorting.cm — Sorting and searching kernel
// Array manipulation, comparison-heavy, allocation patterns.
function make_random_array(n, seed) {
var a = []
var x = seed
var i = 0
for (i = 0; i < n; i++) {
x = ((x * 1103515245 + 12345) & 0x7FFFFFFF) | 0
push(a, x % 10000)
}
return a
}
function make_descending(n) {
var a = []
var i = 0
for (i = n - 1; i >= 0; i--) push(a, i)
return a
}
// Manual quicksort (tests recursion + array mutation)
function qsort(arr, lo, hi) {
if (lo >= hi) return null
var i = lo
var j = hi
var pivot = arr[floor((lo + hi) / 2)]
var tmp = 0
while (i <= j) {
while (arr[i] < pivot) i++
while (arr[j] > pivot) j--
if (i <= j) {
tmp = arr[i]
arr[i] = arr[j]
arr[j] = tmp
i++
j--
}
}
if (lo < j) qsort(arr, lo, j)
if (i < hi) qsort(arr, i, hi)
return null
}
// Merge sort (tests allocation + array creation)
function msort(arr) {
var n = length(arr)
if (n <= 1) return arr
var mid = floor(n / 2)
var left = msort(array(arr, 0, mid))
var right = msort(array(arr, mid, n))
return merge(left, right)
}
function merge(a, b) {
var result = []
var i = 0
var j = 0
while (i < length(a) && j < length(b)) {
if (a[i] <= b[j]) {
push(result, a[i])
i++
} else {
push(result, b[j])
j++
}
}
while (i < length(a)) {
push(result, a[i])
i++
}
while (j < length(b)) {
push(result, b[j])
j++
}
return result
}
// Binary search
function bsearch(arr, target) {
var lo = 0
var hi = length(arr) - 1
var mid = 0
while (lo <= hi) {
mid = floor((lo + hi) / 2)
if (arr[mid] == target) return mid
if (arr[mid] < target) lo = mid + 1
else hi = mid - 1
}
return -1
}
// Sort objects by field
function sort_records(n) {
var records = []
var x = 42
var i = 0
for (i = 0; i < n; i++) {
x = ((x * 1103515245 + 12345) & 0x7FFFFFFF) | 0
push(records, {id: i, score: x % 10000, name: `item_${i}`})
}
return sort(records, "score")
}
return {
// Quicksort 1K random integers
qsort_1k: function(n) {
var i = 0
var a = null
for (i = 0; i < n; i++) {
a = make_random_array(1000, i)
qsort(a, 0, length(a) - 1)
}
return a
},
// Quicksort 10K random integers
qsort_10k: function(n) {
var i = 0
var a = null
for (i = 0; i < n; i++) {
a = make_random_array(10000, i)
qsort(a, 0, length(a) - 1)
}
return a
},
// Merge sort 1K (allocation heavy)
msort_1k: function(n) {
var i = 0
var result = null
for (i = 0; i < n; i++) {
result = msort(make_random_array(1000, i))
}
return result
},
// Built-in sort 1K
builtin_sort_1k: function(n) {
var i = 0
var result = null
for (i = 0; i < n; i++) {
result = sort(make_random_array(1000, i))
}
return result
},
// Sort worst case (descending → ascending)
sort_worst_case: function(n) {
var i = 0
var a = null
for (i = 0; i < n; i++) {
a = make_descending(1000)
qsort(a, 0, length(a) - 1)
}
return a
},
// Binary search in sorted array
bsearch_1k: function(n) {
var sorted = make_random_array(1000, 42)
sorted = sort(sorted)
var found = 0
var i = 0
for (i = 0; i < n; i++) {
if (bsearch(sorted, sorted[i % 1000]) >= 0) found++
}
return found
},
// Sort records by field
sort_records_500: function(n) {
var i = 0
var result = null
for (i = 0; i < n; i++) {
result = sort_records(500)
}
return result
}
}
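The manual quicksort above partitions around the middle element with two converging indices (Hoare style) and recurses on both halves. An illustrative JavaScript port, driven by the same LCG recurrence as make_random_array(); note `Math.imul` is needed here because plain JS doubles would lose the low bits of the 32-bit multiply, whereas the benchmark language presumably handles this natively:

```javascript
// JS port of qsort() from sorting.cm (in-place, Hoare-style partition).
function qsort(arr, lo, hi) {
  if (lo >= hi) return;
  let i = lo, j = hi;
  const pivot = arr[Math.floor((lo + hi) / 2)];
  while (i <= j) {
    while (arr[i] < pivot) i++;
    while (arr[j] > pivot) j--;
    if (i <= j) { const t = arr[i]; arr[i] = arr[j]; arr[j] = t; i++; j--; }
  }
  if (lo < j) qsort(arr, lo, j);
  if (i < hi) qsort(arr, i, hi);
}

// Same LCG as make_random_array(); Math.imul keeps the multiply in 32 bits.
const a = [];
let x = 42;
for (let k = 0; k < 1000; k++) {
  x = (Math.imul(x, 1103515245) + 12345) & 0x7fffffff;
  a.push(x % 10000);
}
qsort(a, 0, a.length - 1);
const isSorted = a.every((v, k) => k === 0 || a[k - 1] <= v);
```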

82
benches/spectral_norm.cm Normal file

@@ -0,0 +1,82 @@
// spectral_norm.cm — Spectral norm kernel
// Pure numeric, dense array access, mathematical computation.
var math = use('math/radians')
function eval_a(i, j) {
return 1.0 / ((i + j) * (i + j + 1) / 2 + i + 1)
}
function eval_a_times_u(n, u, au) {
var i = 0
var j = 0
var sum = 0
for (i = 0; i < n; i++) {
sum = 0
for (j = 0; j < n; j++) {
sum += eval_a(i, j) * u[j]
}
au[i] = sum
}
}
function eval_at_times_u(n, u, atu) {
var i = 0
var j = 0
var sum = 0
for (i = 0; i < n; i++) {
sum = 0
for (j = 0; j < n; j++) {
sum += eval_a(j, i) * u[j]
}
atu[i] = sum
}
}
function eval_ata_times_u(n, u, atau) {
var v = array(n, 0)
eval_a_times_u(n, u, v)
eval_at_times_u(n, v, atau)
}
function spectral_norm(n) {
var u = array(n, 1)
var v = array(n, 0)
var i = 0
var vbv = 0
var vv = 0
for (i = 0; i < 10; i++) {
eval_ata_times_u(n, u, v)
eval_ata_times_u(n, v, u)
}
vbv = 0
vv = 0
for (i = 0; i < n; i++) {
vbv += u[i] * v[i]
vv += v[i] * v[i]
}
return math.sqrt(vbv / vv)
}
return {
spectral_100: function(n) {
var i = 0
var result = 0
for (i = 0; i < n; i++) {
result = spectral_norm(100)
}
return result
},
spectral_200: function(n) {
var i = 0
var result = 0
for (i = 0; i < n; i++) {
result = spectral_norm(200)
}
return result
}
}
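spectral_norm approximates the largest singular value of the implicit matrix A by ten rounds of power iteration on A^T A, then takes the square root of the Rayleigh quotient. A JavaScript sketch of the same computation; the reference value 1.274219991 for n = 100 is the published result of the standard spectral-norm benchmark:

```javascript
// JS sketch of the spectral_norm() kernel above.
const evalA = (i, j) => 1 / (((i + j) * (i + j + 1)) / 2 + i + 1);

// out = A u (or A^T u when transpose is true).
function multiply(n, u, out, transpose) {
  for (let i = 0; i < n; i++) {
    let sum = 0;
    for (let j = 0; j < n; j++) sum += (transpose ? evalA(j, i) : evalA(i, j)) * u[j];
    out[i] = sum;
  }
}

function spectralNorm(n) {
  const u = new Array(n).fill(1);
  const v = new Array(n).fill(0);
  const tmp = new Array(n).fill(0);
  for (let k = 0; k < 10; k++) {
    multiply(n, u, tmp, false); multiply(n, tmp, v, true);  // v = A^T A u
    multiply(n, v, tmp, false); multiply(n, tmp, u, true);  // u = A^T A v
  }
  let vbv = 0, vv = 0;
  for (let i = 0; i < n; i++) { vbv += u[i] * v[i]; vv += v[i] * v[i]; }
  return Math.sqrt(vbv / vv);
}

const norm = spectralNorm(100);
// → 1.274219991 to nine decimal places
```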


@@ -0,0 +1,188 @@
// string_processing.cm — String-heavy kernel
// Concat, split, search, replace, interning path stress.
function make_lorem(paragraphs) {
var base = "Lorem ipsum dolor sit amet consectetur adipiscing elit sed do eiusmod tempor incididunt ut labore et dolore magna aliqua Ut enim ad minim veniam quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat"
var result = ""
var i = 0
for (i = 0; i < paragraphs; i++) {
if (i > 0) result = result + " "
result = result + base
}
return result
}
// Build a lookup table from text
function build_index(txt) {
var words = array(txt, " ")
var index = {}
var i = 0
var w = null
for (i = 0; i < length(words); i++) {
w = words[i]
if (!index[w]) {
index[w] = []
}
push(index[w], i)
}
return index
}
// Levenshtein-like distance (simplified)
function edit_distance(a, b) {
var la = length(a)
var lb = length(b)
if (la == 0) return lb
if (lb == 0) return la
// Use flat array for 2 rows of DP matrix
var prev = array(lb + 1, 0)
var curr = array(lb + 1, 0)
var i = 0
var j = 0
var cost = 0
var del = 0
var ins = 0
var sub = 0
var tmp = null
var ca = array(a)
var cb = array(b)
for (j = 0; j <= lb; j++) prev[j] = j
for (i = 1; i <= la; i++) {
curr[0] = i
for (j = 1; j <= lb; j++) {
cost = ca[i - 1] == cb[j - 1] ? 0 : 1
del = prev[j] + 1
ins = curr[j - 1] + 1
sub = prev[j - 1] + cost
curr[j] = del
if (ins < curr[j]) curr[j] = ins
if (sub < curr[j]) curr[j] = sub
}
tmp = prev
prev = curr
curr = tmp
}
return prev[lb]
}
var lorem_5 = make_lorem(5)
var lorem_20 = make_lorem(20)
return {
// Split text into words and count
string_split_count: function(n) {
var i = 0
var words = null
var count = 0
for (i = 0; i < n; i++) {
words = array(lorem_5, " ")
count += length(words)
}
return count
},
// Build word index (split + hash + array ops)
string_index_build: function(n) {
var i = 0
var idx = null
for (i = 0; i < n; i++) {
idx = build_index(lorem_5)
}
return idx
},
// Search for substrings
string_search: function(n) {
var targets = ["dolor", "minim", "quis", "magna", "ipsum"]
var i = 0
var j = 0
var count = 0
for (i = 0; i < n; i++) {
for (j = 0; j < length(targets); j++) {
if (search(lorem_20, targets[j])) count++
}
}
return count
},
// Replace operations
string_replace: function(n) {
var i = 0
var result = null
for (i = 0; i < n; i++) {
result = replace(lorem_5, "dolor", "DOLOR")
result = replace(result, "ipsum", "IPSUM")
result = replace(result, "amet", "AMET")
}
return result
},
// String concatenation builder
string_builder: function(n) {
var i = 0
var j = 0
var s = null
var total = 0
for (i = 0; i < n; i++) {
s = ""
for (j = 0; j < 50; j++) {
s = s + "key=" + text(j) + "&value=" + text(j * 17) + "&"
}
total += length(s)
}
return total
},
// Edit distance (DP + array + string ops)
edit_distance: function(n) {
var words = ["kitten", "sitting", "saturday", "sunday", "intention", "execution"]
var i = 0
var j = 0
var total = 0
for (i = 0; i < n; i++) {
for (j = 0; j < length(words) - 1; j++) {
total += edit_distance(words[j], words[j + 1])
}
}
return total
},
// Upper/lower/trim chain
string_transforms: function(n) {
var src = " Hello World "
var i = 0
var x = 0
var result = null
for (i = 0; i < n; i++) {
result = trim(src)
result = upper(result)
result = lower(result)
x += length(result)
}
return x
},
// Starts_with / ends_with (interning path)
string_prefix_suffix: function(n) {
var strs = [
"application/json",
"text/html",
"image/png",
"application/xml",
"text/plain"
]
var i = 0
var j = 0
var count = 0
for (i = 0; i < n; i++) {
for (j = 0; j < length(strs); j++) {
if (starts_with(strs[j], "application/")) count++
if (ends_with(strs[j], "/json")) count++
if (starts_with(strs[j], "text/")) count++
}
}
return count
}
}
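edit_distance keeps only two DP rows and takes the minimum of deletion, insertion, and substitution at each cell, so memory is O(length(b)) instead of the full matrix. A JavaScript sketch with the classic test pair:

```javascript
// JS sketch of the two-row edit_distance() DP above.
function editDistance(a, b) {
  if (a.length === 0) return b.length;
  if (b.length === 0) return a.length;
  let prev = Array.from({ length: b.length + 1 }, (_, j) => j);
  let curr = new Array(b.length + 1).fill(0);
  for (let i = 1; i <= a.length; i++) {
    curr[0] = i;
    for (let j = 1; j <= b.length; j++) {
      const cost = a[i - 1] === b[j - 1] ? 0 : 1;
      curr[j] = Math.min(prev[j] + 1,         // deletion
                         curr[j - 1] + 1,     // insertion
                         prev[j - 1] + cost); // substitution
    }
    const tmp = prev; prev = curr; curr = tmp; // reuse rows instead of allocating
  }
  return prev[b.length];
}

const d = editDistance("kitten", "sitting");
// → 3
```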

137
benches/tree_ops.cm Normal file

@@ -0,0 +1,137 @@
// tree_ops.cm — Tree data structure operations kernel
// Pointer chasing, recursion, allocation patterns.
// Binary tree: create, walk, transform, check
function make_tree(depth) {
if (depth <= 0) return {val: 1, left: null, right: null}
return {
val: depth,
left: make_tree(depth - 1),
right: make_tree(depth - 1)
}
}
function tree_check(node) {
if (!node) return 0
if (!node.left) return node.val
return node.val + tree_check(node.left) - tree_check(node.right)
}
function tree_sum(node) {
if (!node) return 0
return node.val + tree_sum(node.left) + tree_sum(node.right)
}
function tree_depth(node) {
if (!node) return 0
var l = tree_depth(node.left)
var r = tree_depth(node.right)
return 1 + (l > r ? l : r)
}
function tree_count(node) {
if (!node) return 0
return 1 + tree_count(node.left) + tree_count(node.right)
}
// Transform tree: map values
function tree_map(node, fn) {
if (!node) return null
return {
val: fn(node.val),
left: tree_map(node.left, fn),
right: tree_map(node.right, fn)
}
}
// Flatten tree to array (in-order)
function tree_flatten(node, result) {
if (!node) return null
tree_flatten(node.left, result)
push(result, node.val)
tree_flatten(node.right, result)
return null
}
// Build sorted tree from array (balanced)
function build_balanced(arr, lo, hi) {
if (lo > hi) return null
var mid = floor((lo + hi) / 2)
return {
val: arr[mid],
left: build_balanced(arr, lo, mid - 1),
right: build_balanced(arr, mid + 1, hi)
}
}
// Find a value in BST
function bst_find(node, val) {
if (!node) return false
if (val == node.val) return true
if (val < node.val) return bst_find(node.left, val)
return bst_find(node.right, val)
}
return {
// Binary tree create + check (allocation heavy)
tree_create_check: function(n) {
var i = 0
var t = null
var x = 0
for (i = 0; i < n; i++) {
t = make_tree(10)
x += tree_check(t)
}
return x
},
// Deep tree traversals
tree_traversal: function(n) {
var t = make_tree(12)
var x = 0
var i = 0
for (i = 0; i < n; i++) {
x += tree_sum(t) + tree_depth(t) + tree_count(t)
}
return x
},
// Tree map (create new tree from old)
tree_transform: function(n) {
var t = make_tree(10)
var i = 0
var mapped = null
for (i = 0; i < n; i++) {
mapped = tree_map(t, function(v) { return v * 2 + 1 })
}
return mapped
},
// Flatten + rebuild (array <-> tree conversion)
tree_flatten_rebuild: function(n) {
var t = make_tree(10)
var i = 0
var flat = null
var rebuilt = null
for (i = 0; i < n; i++) {
flat = []
tree_flatten(t, flat)
rebuilt = build_balanced(flat, 0, length(flat) - 1)
}
return rebuilt
},
// BST search (pointer chasing)
bst_search: function(n) {
// Build a balanced BST of 1024 elements
var data = []
var i = 0
for (i = 0; i < 1024; i++) push(data, i)
var bst = build_balanced(data, 0, 1023)
var found = 0
for (i = 0; i < n; i++) {
if (bst_find(bst, i % 1024)) found++
}
return found
}
}
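Because make_tree builds identical left and right subtrees, the left and right terms of tree_check cancel and the whole check collapses to the root's val, while a full tree of depth d holds 2^(d+1) - 1 nodes. A JavaScript sketch demonstrating both invariants:

```javascript
// JS sketch of make_tree()/tree_check()/tree_count() from tree_ops.cm.
function makeTree(depth) {
  if (depth <= 0) return { val: 1, left: null, right: null };
  return { val: depth, left: makeTree(depth - 1), right: makeTree(depth - 1) };
}
function treeCheck(node) {
  if (!node) return 0;
  if (!node.left) return node.val;
  return node.val + treeCheck(node.left) - treeCheck(node.right);
}
function treeCount(node) {
  return node ? 1 + treeCount(node.left) + treeCount(node.right) : 0;
}

const t = makeTree(10);
const check = treeCheck(t); // identical subtrees cancel, leaving the root val: 10
const count = treeCount(t); // full tree of depth 10: 2^11 - 1 = 2047
```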


@@ -1,14 +1,16 @@
function mainThread() {
var maxDepth = max(6, Number(arg[0] || 16));
var stretchDepth = maxDepth + 1;
var check = itemCheck(bottomUpTree(stretchDepth));
var longLivedTree = null
var depth = null
var iterations = null
log.console(`stretch tree of depth ${stretchDepth}\t check: ${check}`);
longLivedTree = bottomUpTree(maxDepth);
for (depth = 4; depth <= maxDepth; depth += 2) {
iterations = 1 << maxDepth - depth + 4;
work(iterations, depth);
}
@@ -17,7 +19,8 @@ function mainThread() {
function work(iterations, depth) {
var check = 0;
var i = 0
for (i = 0; i < iterations; i++)
check += itemCheck(bottomUpTree(depth));
log.console(`${iterations}\t trees of depth ${depth}\t check: ${check}`);
}


@@ -1,6 +1,9 @@
var blob = use('blob')
var math = use('math/radians')
var i = 0
var j = 0
function eratosthenes (n) {
var sieve = blob(n, true)
var sqrtN = whole(math.sqrt(n));
@@ -9,7 +12,7 @@ function eratosthenes (n) {
if (sieve.read_logical(i))
for (j = i * i; j <= n; j += i)
sieve.write_bit(j, false);
return sieve;
}
@@ -17,9 +20,9 @@ var sieve = eratosthenes(10000000);
stone(sieve)
var c = 0
for (i = 0; i < length(sieve); i++)
if (sieve.read_logical(i)) c++
log.console(c)
$stop()


@@ -1,58 +1,65 @@
function fannkuch(n) {
var perm1 = [n]
var i = 0
var k = null
var r = null
var t = null
var p0 = null
var j = null
var more = null
for (i = 0; i < n; i++) perm1[i] = i
var perm = [n]
var count = [n]
var f = 0
var flips = 0
var nperm = 0
var checksum = 0
r = n
while (r > 0) {
i = 0
while (r != 1) { count[r-1] = r; r -= 1 }
while (i < n) { perm[i] = perm1[i]; i += 1 }
// Count flips and update max and checksum
f = 0
k = perm[0]
while (k != 0) {
i = 0
while (2*i < k) {
t = perm[i]; perm[i] = perm[k-i]; perm[k-i] = t
i += 1
}
k = perm[0]
f += 1
}
if (f > flips) flips = f
if ((nperm & 0x1) == 0) checksum += f; else checksum -= f
// Use incremental change to generate another permutation
more = true
while (more) {
if (r == n) {
log.console( checksum )
return flips
}
p0 = perm1[0]
i = 0
while (i < r) {
j = i + 1
perm1[i] = perm1[j]
i = j
}
perm1[r] = p0
count[r] -= 1
if (count[r] > 0) more = false; else r += 1
}
nperm += 1
}
return flips;
}
var n = arg[0] || 10
log.console(`Pfannkuchen(${n}) = ${fannkuch(n)}`)
$stop()
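The fannkuch kernel above follows the standard fannkuch-redux algorithm: repeatedly reverse the first perm[0]+1 elements until a 0 leads, then step to the next permutation via rotation and the counting array. A JavaScript port, checked against the published n = 7 results (checksum 228, maximum flips 16):

```javascript
// Illustrative JS port of the fannkuch() kernel above.
function fannkuch(n) {
  const perm1 = [...Array(n).keys()];
  const perm = new Array(n);
  const count = new Array(n);
  let flips = 0, nperm = 0, checksum = 0, r = n;
  for (;;) {
    while (r !== 1) { count[r - 1] = r; r -= 1; }
    for (let i = 0; i < n; i++) perm[i] = perm1[i];
    // Count flips for this permutation: reverse the leading k+1 elements until perm[0] == 0.
    let f = 0, k = perm[0];
    while (k !== 0) {
      for (let i = 0; 2 * i < k; i++) {
        const t = perm[i]; perm[i] = perm[k - i]; perm[k - i] = t;
      }
      k = perm[0];
      f += 1;
    }
    if (f > flips) flips = f;
    checksum += (nperm & 1) === 0 ? f : -f;
    // Generate the next permutation: rotate the first r+1 elements, decrement count.
    for (;;) {
      if (r === n) return { flips, checksum };
      const p0 = perm1[0];
      for (let i = 0; i < r; i++) perm1[i] = perm1[i + 1];
      perm1[r] = p0;
      count[r] -= 1;
      if (count[r] > 0) break;
      r += 1;
    }
    nperm += 1;
  }
}

const res = fannkuch(7);
// → flips 16, checksum 228
```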


@@ -1,22 +1,12 @@
var time = use('time')
var math = use('math/radians')
////////////////////////////////////////////////////////////////////////////////
// JavaScript Performance Benchmark Suite
// Tests core JS operations: property access, function calls, arithmetic, etc.
////////////////////////////////////////////////////////////////////////////////
// Test configurations
def iterations = {
simple: 10000000,
medium: 1000000,
complex: 100000
};
////////////////////////////////////////////////////////////////////////////////
// Utility: measureTime(fn) => how long fn() takes in seconds
////////////////////////////////////////////////////////////////////////////////
function measureTime(fn) {
var start = time.number();
fn();
@@ -24,26 +14,24 @@ function measureTime(fn) {
return (end - start);
}
////////////////////////////////////////////////////////////////////////////////
// Benchmark: Property Access
////////////////////////////////////////////////////////////////////////////////
function benchPropertyAccess() {
var obj = {
a: 1, b: 2, c: 3, d: 4, e: 5,
nested: { x: 10, y: 20, z: 30 }
};
var readTime = measureTime(function() {
var sum = 0;
var i = 0
for (i = 0; i < iterations.simple; i++) {
sum += obj.a + obj.b + obj.c + obj.d + obj.e;
sum += obj.nested.x + obj.nested.y + obj.nested.z;
}
});
var writeTime = measureTime(function() {
var i = 0
for (i = 0; i < iterations.simple; i++) {
obj.a = i;
obj.b = i + 1;
obj.c = i + 2;
@@ -51,49 +39,48 @@ function benchPropertyAccess() {
obj.nested.y = i * 3;
}
});
return { readTime: readTime, writeTime: writeTime };
}
////////////////////////////////////////////////////////////////////////////////
// Benchmark: Function Calls
////////////////////////////////////////////////////////////////////////////////
function benchFunctionCalls() {
function add(a, b) { return a + b; }
function multiply(a, b) { return a * b; }
function complexCalc(a, b, c) { return (a + b) * c / 2; }
var obj = {
method: function(x) { return x * 2; },
nested: {
deepMethod: function(x, y) { return x + y; }
}
};
var simpleCallTime = measureTime(function() {
var result = 0;
var i = 0
for (i = 0; i < iterations.simple; i++) {
result = add(i, 1);
result = multiply(result, 2);
}
});
var methodCallTime = measureTime(function() {
var result = 0;
var i = 0
for (i = 0; i < iterations.simple; i++) {
result = obj.method(i);
result = obj.nested.deepMethod(result, i);
}
});
var complexCallTime = measureTime(function() {
var result = 0;
var i = 0
for (i = 0; i < iterations.medium; i++) {
result = complexCalc(i, i + 1, i + 2);
}
});
return {
simpleCallTime: simpleCallTime,
methodCallTime: methodCallTime,
@@ -101,37 +88,39 @@ function benchFunctionCalls() {
};
}
////////////////////////////////////////////////////////////////////////////////
// Benchmark: Array Operations
////////////////////////////////////////////////////////////////////////////////
function benchArrayOps() {
var i = 0
var pushTime = measureTime(function() {
var arr = [];
var j = 0
for (j = 0; j < iterations.medium; j++) {
push(arr, j);
}
});
var arr = [];
for (i = 0; i < 10000; i++) push(arr, i);
var accessTime = measureTime(function() {
var sum = 0;
var j = 0
for (j = 0; j < iterations.medium; j++) {
sum += arr[j % 10000];
}
});
var iterateTime = measureTime(function() {
var sum = 0;
var j = 0
var k = 0
for (j = 0; j < 1000; j++) {
for (k = 0; k < length(arr); k++) {
sum += arr[k];
}
}
});
return {
pushTime: pushTime,
accessTime: accessTime,
@@ -139,27 +128,27 @@ function benchArrayOps() {
};
}
////////////////////////////////////////////////////////////////////////////////
// Benchmark: Object Creation
////////////////////////////////////////////////////////////////////////////////
function benchObjectCreation() {
var literalTime = measureTime(function() {
var i = 0
var obj = null
for (i = 0; i < iterations.medium; i++) {
obj = { x: i, y: i * 2, z: i * 3 };
}
});
function Point(x, y) {
return {x,y}
}
var defructorTime = measureTime(function() {
var i = 0
var p = null
for (i = 0; i < iterations.medium; i++) {
p = Point(i, i * 2);
}
});
var protoObj = {
x: 0,
y: 0,
@@ -168,15 +157,17 @@ function benchObjectCreation() {
this.y += dy;
}
};
var prototypeTime = measureTime(function() {
var i = 0
var obj = null
for (i = 0; i < iterations.medium; i++) {
obj = meme(protoObj);
obj.x = i;
obj.y = i * 2;
}
});
return {
literalTime: literalTime,
defructorTime: defructorTime,
@@ -184,36 +175,39 @@ function benchObjectCreation() {
};
}
////////////////////////////////////////////////////////////////////////////////
// Benchmark: String Operations
////////////////////////////////////////////////////////////////////////////////
function benchStringOps() {
var i = 0
var strings = [];
var concatTime = measureTime(function() {
var str = "";
var j = 0
for (j = 0; j < iterations.complex; j++) {
str = "test" + j + "value";
}
});
for (i = 0; i < 1000; i++) {
push(strings, "string" + i);
}
var joinTime = measureTime(function() {
var j = 0
var result = null
for (j = 0; j < iterations.complex; j++) {
result = text(strings, ",");
}
});
var splitTime = measureTime(function() {
var str = "a,b,c,d,e,f,g,h,i,j,k,l,m,n,o,p";
var j = 0
var parts = null
for (j = 0; j < iterations.medium; j++) {
parts = array(str, ",");
}
});
return {
concatTime: concatTime,
joinTime: joinTime,
@@ -221,35 +215,34 @@ function benchStringOps() {
};
}
////////////////////////////////////////////////////////////////////////////////
// Benchmark: Arithmetic Operations
////////////////////////////////////////////////////////////////////////////////
function benchArithmetic() {
var intMathTime = measureTime(function() {
var result = 1;
var i = 0
for (i = 0; i < iterations.simple; i++) {
result = ((result + i) * 2 - 1) / 3;
result = result % 1000 + 1;
}
});
var floatMathTime = measureTime(function() {
var result = 1.5;
var i = 0
for (i = 0; i < iterations.simple; i++) {
result = math.sine(result) + math.cosine(i * 0.01);
result = math.sqrt(abs(result)) + 0.1;
}
});
var bitwiseTime = measureTime(function() {
var result = 0;
var i = 0
for (i = 0; i < iterations.simple; i++) {
result = (result ^ i) & 0xFFFF;
result = (result << 1) | (result >> 15);
}
});
return {
intMathTime: intMathTime,
floatMathTime: floatMathTime,
@@ -257,134 +250,123 @@ function benchArithmetic() {
};
}
////////////////////////////////////////////////////////////////////////////////
// Benchmark: Closure Operations
////////////////////////////////////////////////////////////////////////////////
function benchClosures() {
var i = 0
function makeAdder(x) {
return function(y) { return x + y; };
}
var closureCreateTime = measureTime(function() {
var funcs = [];
var j = 0
for (j = 0; j < iterations.medium; j++) {
push(funcs, makeAdder(j));
}
});
var adders = [];
for (i = 0; i < 1000; i++) {
push(adders, makeAdder(i));
}
var closureCallTime = measureTime(function() {
var sum = 0;
var j = 0
for (j = 0; j < iterations.medium; j++) {
sum += adders[j % 1000](j);
}
});
return {
closureCreateTime: closureCreateTime,
closureCallTime: closureCallTime
};
}
////////////////////////////////////////////////////////////////////////////////
// Main benchmark runner
////////////////////////////////////////////////////////////////////////////////
log.console("JavaScript Performance Benchmark");
log.console("======================\n");
// Property Access
log.console("BENCHMARK: Property Access");
var propResults = benchPropertyAccess();
log.console(" Read time: " + propResults.readTime.toFixed(3) + "s => " +
(iterations.simple / propResults.readTime).toFixed(1) + " reads/sec [" +
(propResults.readTime / iterations.simple * 1e9).toFixed(1) + " ns/op]");
log.console(" Write time: " + propResults.writeTime.toFixed(3) + "s => " +
(iterations.simple / propResults.writeTime).toFixed(1) + " writes/sec [" +
(propResults.writeTime / iterations.simple * 1e9).toFixed(1) + " ns/op]");
log.console("");
// Function Calls
log.console("BENCHMARK: Function Calls");
var funcResults = benchFunctionCalls();
log.console(" Simple calls: " + funcResults.simpleCallTime.toFixed(3) + "s => " +
log.console(" Simple calls: " + funcResults.simpleCallTime.toFixed(3) + "s => " +
(iterations.simple / funcResults.simpleCallTime).toFixed(1) + " calls/sec [" +
(funcResults.simpleCallTime / iterations.simple * 1e9).toFixed(1) + " ns/op]");
log.console(" Method calls: " + funcResults.methodCallTime.toFixed(3) + "s => " +
log.console(" Method calls: " + funcResults.methodCallTime.toFixed(3) + "s => " +
(iterations.simple / funcResults.methodCallTime).toFixed(1) + " calls/sec [" +
(funcResults.methodCallTime / iterations.simple * 1e9).toFixed(1) + " ns/op]");
log.console(" Complex calls: " + funcResults.complexCallTime.toFixed(3) + "s => " +
log.console(" Complex calls: " + funcResults.complexCallTime.toFixed(3) + "s => " +
(iterations.medium / funcResults.complexCallTime).toFixed(1) + " calls/sec [" +
(funcResults.complexCallTime / iterations.medium * 1e9).toFixed(1) + " ns/op]");
log.console("");
// Array Operations
log.console("BENCHMARK: Array Operations");
var arrayResults = benchArrayOps();
log.console(" Push: " + arrayResults.pushTime.toFixed(3) + "s => " +
log.console(" Push: " + arrayResults.pushTime.toFixed(3) + "s => " +
(iterations.medium / arrayResults.pushTime).toFixed(1) + " pushes/sec [" +
(arrayResults.pushTime / iterations.medium * 1e9).toFixed(1) + " ns/op]");
log.console(" Access: " + arrayResults.accessTime.toFixed(3) + "s => " +
log.console(" Access: " + arrayResults.accessTime.toFixed(3) + "s => " +
(iterations.medium / arrayResults.accessTime).toFixed(1) + " accesses/sec [" +
(arrayResults.accessTime / iterations.medium * 1e9).toFixed(1) + " ns/op]");
log.console(" Iterate: " + arrayResults.iterateTime.toFixed(3) + "s => " +
log.console(" Iterate: " + arrayResults.iterateTime.toFixed(3) + "s => " +
(1000 / arrayResults.iterateTime).toFixed(1) + " full iterations/sec");
log.console("");
// Object Creation
log.console("BENCHMARK: Object Creation");
var objResults = benchObjectCreation();
log.console(" Literal: " + objResults.literalTime.toFixed(3) + "s => " +
log.console(" Literal: " + objResults.literalTime.toFixed(3) + "s => " +
(iterations.medium / objResults.literalTime).toFixed(1) + " creates/sec [" +
(objResults.literalTime / iterations.medium * 1e9).toFixed(1) + " ns/op]");
log.console(" Constructor: " + objResults.defructorTime.toFixed(3) + "s => " +
log.console(" Constructor: " + objResults.defructorTime.toFixed(3) + "s => " +
(iterations.medium / objResults.defructorTime).toFixed(1) + " creates/sec [" +
(objResults.defructorTime / iterations.medium * 1e9).toFixed(1) + " ns/op]");
log.console(" Prototype: " + objResults.prototypeTime.toFixed(3) + "s => " +
log.console(" Prototype: " + objResults.prototypeTime.toFixed(3) + "s => " +
(iterations.medium / objResults.prototypeTime).toFixed(1) + " creates/sec [" +
(objResults.prototypeTime / iterations.medium * 1e9).toFixed(1) + " ns/op]");
log.console("");
// String Operations
log.console("BENCHMARK: String Operations");
var strResults = benchStringOps();
log.console(" Concat: " + strResults.concatTime.toFixed(3) + "s => " +
log.console(" Concat: " + strResults.concatTime.toFixed(3) + "s => " +
(iterations.complex / strResults.concatTime).toFixed(1) + " concats/sec [" +
(strResults.concatTime / iterations.complex * 1e9).toFixed(1) + " ns/op]");
log.console(" Join: " + strResults.joinTime.toFixed(3) + "s => " +
log.console(" Join: " + strResults.joinTime.toFixed(3) + "s => " +
(iterations.complex / strResults.joinTime).toFixed(1) + " joins/sec [" +
(strResults.joinTime / iterations.complex * 1e9).toFixed(1) + " ns/op]");
log.console(" Split: " + strResults.splitTime.toFixed(3) + "s => " +
log.console(" Split: " + strResults.splitTime.toFixed(3) + "s => " +
(iterations.medium / strResults.splitTime).toFixed(1) + " splits/sec [" +
(strResults.splitTime / iterations.medium * 1e9).toFixed(1) + " ns/op]");
log.console("");
// Arithmetic Operations
log.console("BENCHMARK: Arithmetic Operations");
var mathResults = benchArithmetic();
log.console(" Integer math: " + mathResults.intMathTime.toFixed(3) + "s => " +
log.console(" Integer math: " + mathResults.intMathTime.toFixed(3) + "s => " +
(iterations.simple / mathResults.intMathTime).toFixed(1) + " ops/sec [" +
(mathResults.intMathTime / iterations.simple * 1e9).toFixed(1) + " ns/op]");
log.console(" Float math: " + mathResults.floatMathTime.toFixed(3) + "s => " +
log.console(" Float math: " + mathResults.floatMathTime.toFixed(3) + "s => " +
(iterations.simple / mathResults.floatMathTime).toFixed(1) + " ops/sec [" +
(mathResults.floatMathTime / iterations.simple * 1e9).toFixed(1) + " ns/op]");
log.console(" Bitwise: " + mathResults.bitwiseTime.toFixed(3) + "s => " +
log.console(" Bitwise: " + mathResults.bitwiseTime.toFixed(3) + "s => " +
(iterations.simple / mathResults.bitwiseTime).toFixed(1) + " ops/sec [" +
(mathResults.bitwiseTime / iterations.simple * 1e9).toFixed(1) + " ns/op]");
log.console("");
// Closures
log.console("BENCHMARK: Closures");
var closureResults = benchClosures();
log.console(" Create: " + closureResults.closureCreateTime.toFixed(3) + "s => " +
log.console(" Create: " + closureResults.closureCreateTime.toFixed(3) + "s => " +
(iterations.medium / closureResults.closureCreateTime).toFixed(1) + " creates/sec [" +
(closureResults.closureCreateTime / iterations.medium * 1e9).toFixed(1) + " ns/op]");
log.console(" Call: " + closureResults.closureCallTime.toFixed(3) + "s => " +
log.console(" Call: " + closureResults.closureCallTime.toFixed(3) + "s => " +
(iterations.medium / closureResults.closureCallTime).toFixed(1) + " calls/sec [" +
(closureResults.closureCallTime / iterations.medium * 1e9).toFixed(1) + " ns/op]");
log.console("");
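The closure benchmark above builds 1000 adders and calls them round-robin while timing the loop. A minimal Python sketch of the same measurement pattern (function names here are illustrative, not part of the codebase):

```python
import time

def measure_time(fn):
    # Wall-clock seconds for one call, mirroring the benchmark's measureTime
    t0 = time.perf_counter()
    fn()
    return time.perf_counter() - t0

def make_adder(x):
    # Each call captures x in a fresh closure
    return lambda y: x + y

def bench_closures(n=10000):
    adders = [make_adder(i) for i in range(1000)]
    result = [0]
    def run():
        s = 0
        for j in range(n):
            s += adders[j % 1000](j)  # round-robin closure calls
        result[0] = s
    return measure_time(run), result[0]
```

The returned sum doubles as a cheap correctness check that the closures really captured distinct values of `x`.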


@@ -1,40 +1,46 @@
var blob = use('blob')
var iter = 50, limit = 2.0;
var zr, zi, cr, ci, tr, ti;
var iter = 50
var limit = 2.0
var zr = null
var zi = null
var cr = null
var ci = null
var tr = null
var ti = null
var y = 0
var x = 0
var i = 0
var row = null
var h = Number(arg[0]) || 500
var w = h
log.console(`P4\n${w} ${h}`);
for (var y = 0; y < h; ++y) {
// Create a blob for the row - we need w bits
var row = blob(w);
for (y = 0; y < h; ++y) {
row = blob(w);
for (var x = 0; x < w; ++x) {
zr = zi = tr = ti = 0;
for (x = 0; x < w; ++x) {
zr = 0; zi = 0; tr = 0; ti = 0;
cr = 2 * x / w - 1.5;
ci = 2 * y / h - 1;
for (var i = 0; i < iter && (tr + ti <= limit * limit); ++i) {
for (i = 0; i < iter && (tr + ti <= limit * limit); ++i) {
zi = 2 * zr * zi + ci;
zr = tr - ti + cr;
tr = zr * zr;
ti = zi * zi;
}
// Write a 1 bit if inside the set, 0 if outside
if (tr + ti <= limit * limit)
row.write_bit(1);
else
row.write_bit(0);
}
// Convert the blob to stone (immutable) to prepare for output
stone(row)
// Output the blob data as raw bytes
log.console(text(row, 'b'));
log.console(text(row, 'b'));
}
$stop()
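The Mandelbrot file above iterates z = z² + c per pixel (50 iterations, escape radius 2.0) and packs one bit per pixel into a PBM "P4" bitmap. A self-contained Python sketch of that same bit-packing scheme, assuming the usual MSB-first PBM row layout:

```python
def mandelbrot_pbm(h):
    # Emits a PBM "P4" bitmap: 1 bit per pixel, packed MSB-first per row
    w = h
    iters, limit = 50, 2.0
    out = bytearray(f"P4\n{w} {h}\n".encode())
    for y in range(h):
        bits, nbits = 0, 0
        for x in range(w):
            zr = zi = tr = ti = 0.0
            cr = 2.0 * x / w - 1.5
            ci = 2.0 * y / h - 1.0
            i = 0
            while i < iters and tr + ti <= limit * limit:
                zi = 2.0 * zr * zi + ci
                zr = tr - ti + cr
                tr = zr * zr
                ti = zi * zi
                i += 1
            # 1 bit if still bounded (inside the set), 0 if it escaped
            bits = (bits << 1) | (1 if tr + ti <= limit * limit else 0)
            nbits += 1
            if nbits == 8:
                out.append(bits)
                bits = nbits = 0
        if nbits:  # pad the final partial byte of the row
            out.append(bits << (8 - nbits))
    return bytes(out)
```

The `stone(row)` / `text(row, 'b')` calls in the diff are the runtime's way of freezing the row blob and emitting it as raw bytes; the `bytearray` here plays the same role.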


@@ -1,9 +1,12 @@
var math = use('math/radians')
var N = 1000000;
var num = 0;
for (var i = 0; i < N; i ++) {
var x = 2 * $random();
var y = $random();
var i = 0
var x = null
var y = null
for (i = 0; i < N; i++) {
x = 2 * $random();
y = $random();
if (y < math.sine(x * x))
num++;
}
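The loop above is a Monte Carlo hit count: sample x in [0, 2), y in [0, 1), and count points under the curve. Assuming `math.sine` is the ordinary radian sine, a Python sketch of the same estimate (the function name and seeding are assumptions for reproducibility, not from the source):

```python
import math
import random

def estimate_area(n, seed=0):
    # Monte Carlo: fraction of (x, y) samples with y < sin(x*x),
    # x uniform in [0, 2), y uniform in [0, 1); rectangle area is 2*1
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        x = 2 * rng.random()
        y = rng.random()
        if y < math.sin(x * x):
            hits += 1
    return 2 * hits / n
```

With enough samples this converges to the area under sin(x²) on [0, √π] (sin(x²) is negative beyond that point, so no sample lands under it), roughly 0.89.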


@@ -2,60 +2,60 @@ var math = use('math/radians')
var SOLAR_MASS = 4 * pi * pi;
var DAYS_PER_YEAR = 365.24;
function Body(x, y, z, vx, vy, vz, mass) {
return {x, y, z, vx, vy, vz, mass};
function Body(p) {
return {x: p.x, y: p.y, z: p.z, vx: p.vx, vy: p.vy, vz: p.vz, mass: p.mass};
}
function Jupiter() {
return Body(
4.84143144246472090e+00,
-1.16032004402742839e+00,
-1.03622044471123109e-01,
1.66007664274403694e-03 * DAYS_PER_YEAR,
7.69901118419740425e-03 * DAYS_PER_YEAR,
-6.90460016972063023e-05 * DAYS_PER_YEAR,
9.54791938424326609e-04 * SOLAR_MASS
);
return Body({
x: 4.84143144246472090e+00,
y: -1.16032004402742839e+00,
z: -1.03622044471123109e-01,
vx: 1.66007664274403694e-03 * DAYS_PER_YEAR,
vy: 7.69901118419740425e-03 * DAYS_PER_YEAR,
vz: -6.90460016972063023e-05 * DAYS_PER_YEAR,
mass: 9.54791938424326609e-04 * SOLAR_MASS
});
}
function Saturn() {
return Body(
8.34336671824457987e+00,
4.12479856412430479e+00,
-4.03523417114321381e-01,
-2.76742510726862411e-03 * DAYS_PER_YEAR,
4.99852801234917238e-03 * DAYS_PER_YEAR,
2.30417297573763929e-05 * DAYS_PER_YEAR,
2.85885980666130812e-04 * SOLAR_MASS
);
return Body({
x: 8.34336671824457987e+00,
y: 4.12479856412430479e+00,
z: -4.03523417114321381e-01,
vx: -2.76742510726862411e-03 * DAYS_PER_YEAR,
vy: 4.99852801234917238e-03 * DAYS_PER_YEAR,
vz: 2.30417297573763929e-05 * DAYS_PER_YEAR,
mass: 2.85885980666130812e-04 * SOLAR_MASS
});
}
function Uranus() {
return Body(
1.28943695621391310e+01,
-1.51111514016986312e+01,
-2.23307578892655734e-01,
2.96460137564761618e-03 * DAYS_PER_YEAR,
2.37847173959480950e-03 * DAYS_PER_YEAR,
-2.96589568540237556e-05 * DAYS_PER_YEAR,
4.36624404335156298e-05 * SOLAR_MASS
);
return Body({
x: 1.28943695621391310e+01,
y: -1.51111514016986312e+01,
z: -2.23307578892655734e-01,
vx: 2.96460137564761618e-03 * DAYS_PER_YEAR,
vy: 2.37847173959480950e-03 * DAYS_PER_YEAR,
vz: -2.96589568540237556e-05 * DAYS_PER_YEAR,
mass: 4.36624404335156298e-05 * SOLAR_MASS
});
}
function Neptune() {
return Body(
1.53796971148509165e+01,
-2.59193146099879641e+01,
1.79258772950371181e-01,
2.68067772490389322e-03 * DAYS_PER_YEAR,
1.62824170038242295e-03 * DAYS_PER_YEAR,
-9.51592254519715870e-05 * DAYS_PER_YEAR,
5.15138902046611451e-05 * SOLAR_MASS
);
return Body({
x: 1.53796971148509165e+01,
y: -2.59193146099879641e+01,
z: 1.79258772950371181e-01,
vx: 2.68067772490389322e-03 * DAYS_PER_YEAR,
vy: 1.62824170038242295e-03 * DAYS_PER_YEAR,
vz: -9.51592254519715870e-05 * DAYS_PER_YEAR,
mass: 5.15138902046611451e-05 * SOLAR_MASS
});
}
function Sun() {
return Body(0.0, 0.0, 0.0, 0.0, 0.0, 0.0, SOLAR_MASS);
return Body({x: 0.0, y: 0.0, z: 0.0, vx: 0.0, vy: 0.0, vz: 0.0, mass: SOLAR_MASS});
}
var bodies = Array(Sun(), Jupiter(), Saturn(), Uranus(), Neptune());
@@ -65,15 +65,18 @@ function offsetMomentum() {
var py = 0;
var pz = 0;
var size = length(bodies);
for (var i = 0; i < size; i++) {
var body = bodies[i];
var mass = body.mass;
var i = 0
var body = null
var mass = null
for (i = 0; i < size; i++) {
body = bodies[i];
mass = body.mass;
px += body.vx * mass;
py += body.vy * mass;
pz += body.vz * mass;
}
var body = bodies[0];
body = bodies[0];
body.vx = -px / SOLAR_MASS;
body.vy = -py / SOLAR_MASS;
body.vz = -pz / SOLAR_MASS;
@@ -81,27 +84,42 @@ function offsetMomentum() {
function advance(dt) {
var size = length(bodies);
var i = 0
var j = 0
var bodyi = null
var bodyj = null
var vxi = null
var vyi = null
var vzi = null
var dx = null
var dy = null
var dz = null
var d2 = null
var mag = null
var massj = null
var massi = null
var body = null
for (var i = 0; i < size; i++) {
var bodyi = bodies[i];
var vxi = bodyi.vx;
var vyi = bodyi.vy;
var vzi = bodyi.vz;
for (var j = i + 1; j < size; j++) {
var bodyj = bodies[j];
var dx = bodyi.x - bodyj.x;
var dy = bodyi.y - bodyj.y;
var dz = bodyi.z - bodyj.z;
for (i = 0; i < size; i++) {
bodyi = bodies[i];
vxi = bodyi.vx;
vyi = bodyi.vy;
vzi = bodyi.vz;
for (j = i + 1; j < size; j++) {
bodyj = bodies[j];
dx = bodyi.x - bodyj.x;
dy = bodyi.y - bodyj.y;
dz = bodyi.z - bodyj.z;
var d2 = dx * dx + dy * dy + dz * dz;
var mag = dt / (d2 * math.sqrt(d2));
d2 = dx * dx + dy * dy + dz * dz;
mag = dt / (d2 * math.sqrt(d2));
var massj = bodyj.mass;
massj = bodyj.mass;
vxi -= dx * massj * mag;
vyi -= dy * massj * mag;
vzi -= dz * massj * mag;
var massi = bodyi.mass;
massi = bodyi.mass;
bodyj.vx += dx * massi * mag;
bodyj.vy += dy * massi * mag;
bodyj.vz += dz * massi * mag;
@@ -111,8 +129,8 @@ function advance(dt) {
bodyi.vz = vzi;
}
for (var i = 0; i < size; i++) {
var body = bodies[i];
for (i = 0; i < size; i++) {
body = bodies[i];
body.x += dt * body.vx;
body.y += dt * body.vy;
body.z += dt * body.vz;
@@ -122,20 +140,28 @@ function advance(dt) {
function energy() {
var e = 0;
var size = length(bodies);
var i = 0
var j = 0
var bodyi = null
var bodyj = null
var dx = null
var dy = null
var dz = null
var distance = null
for (var i = 0; i < size; i++) {
var bodyi = bodies[i];
for (i = 0; i < size; i++) {
bodyi = bodies[i];
e += 0.5 * bodyi.mass * ( bodyi.vx * bodyi.vx +
e += 0.5 * bodyi.mass * ( bodyi.vx * bodyi.vx +
bodyi.vy * bodyi.vy + bodyi.vz * bodyi.vz );
for (var j = i + 1; j < size; j++) {
var bodyj = bodies[j];
var dx = bodyi.x - bodyj.x;
var dy = bodyi.y - bodyj.y;
var dz = bodyi.z - bodyj.z;
for (j = i + 1; j < size; j++) {
bodyj = bodies[j];
dx = bodyi.x - bodyj.x;
dy = bodyi.y - bodyj.y;
dz = bodyi.z - bodyj.z;
var distance = math.sqrt(dx * dx + dy * dy + dz * dz);
distance = math.sqrt(dx * dx + dy * dy + dz * dz);
e -= (bodyi.mass * bodyj.mass) / distance;
}
}
@@ -143,12 +169,13 @@ function energy() {
}
var n = arg[0] || 100000
var i = 0
offsetMomentum();
log.console(`n = ${n}`)
log.console(energy().toFixed(9))
for (var i = 0; i < n; i++)
for (i = 0; i < n; i++)
advance(0.01);
log.console(energy().toFixed(9))
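The n-body `energy()` function diffed above sums kinetic energy and subtracts pairwise gravitational potential; the script prints it before and after advancing to check that the integrator conserves it. A minimal Python restatement of the energy formula (dict-based bodies are an illustration, not the repo's representation):

```python
import math

def energy(bodies):
    # Total energy: kinetic 0.5*m*v^2 per body, minus m_i*m_j / r per pair
    e = 0.0
    for i, bi in enumerate(bodies):
        e += 0.5 * bi["mass"] * (bi["vx"]**2 + bi["vy"]**2 + bi["vz"]**2)
        for bj in bodies[i + 1:]:
            dx = bi["x"] - bj["x"]
            dy = bi["y"] - bj["y"]
            dz = bi["z"] - bj["z"]
            e -= bi["mass"] * bj["mass"] / math.sqrt(dx*dx + dy*dy + dz*dz)
    return e
```

Two unit masses at rest one unit apart give energy exactly -1, a handy sanity check.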


@@ -1,5 +1,5 @@
var nota = use('nota')
var os = use('os')
var nota = use('internal/nota')
var os = use('internal/os')
var io = use('fd')
var json = use('json')
@@ -7,41 +7,40 @@ var ll = io.slurp('benchmarks/nota.json')
var newarr = []
var accstr = ""
for (var i = 0; i < 10000; i++) {
var i = 0
var start = null
var jll = null
var jsonStr = null
var nll = null
var oll = null
for (i = 0; i < 10000; i++) {
accstr += i;
newarrpush(i.toString())
push(newarr, text(i))
}
// Arrays to store timing results
var jsonDecodeTimes = [];
var jsonEncodeTimes = [];
var notaEncodeTimes = [];
var notaDecodeTimes = [];
var notaSizes = [];
// Run 100 tests
for (var i = 0; i < 100; i++) {
// JSON Decode test
var start = os.now();
var jll = json.decode(ll);
jsonDecodeTimespush((os.now() - start) * 1000);
// JSON Encode test
for (i = 0; i < 100; i++) {
start = os.now();
var jsonStr = JSON.stringify(jll);
jsonEncodeTimespush((os.now() - start) * 1000);
jll = json.decode(ll);
push(jsonDecodeTimes, (os.now() - start) * 1000);
// NOTA Encode test
start = os.now();
var nll = nota.encode(jll);
notaEncodeTimespush((os.now() - start) * 1000);
jsonStr = JSON.stringify(jll);
push(jsonEncodeTimes, (os.now() - start) * 1000);
// NOTA Decode test
start = os.now();
var oll = nota.decode(nll);
notaDecodeTimespush((os.now() - start) * 1000);
nll = nota.encode(jll);
push(notaEncodeTimes, (os.now() - start) * 1000);
start = os.now();
oll = nota.decode(nll);
push(notaDecodeTimes, (os.now() - start) * 1000);
}
// Calculate statistics
function getStats(arr) {
return {
avg: reduce(arr, (a,b) => a+b, 0) / length(arr),
@@ -50,7 +49,6 @@ function getStats(arr) {
};
}
// Pretty print results
log.console("\n== Performance Test Results (100 iterations) ==");
log.console("\nJSON Decoding (ms):");
def jsonDecStats = getStats(jsonDecodeTimes);
@@ -75,4 +73,3 @@ def notaDecStats = getStats(notaDecodeTimes);
log.console(`Average: ${notaDecStats.avg.toFixed(2)} ms`);
log.console(`Min: ${notaDecStats.min.toFixed(2)} ms`);
log.console(`Max: ${notaDecStats.max.toFixed(2)} ms`);


@@ -5,21 +5,27 @@ function A(i,j) {
}
function Au(u,v) {
for (var i=0; i<length(u); ++i) {
var t = 0;
for (var j=0; j<length(u); ++j)
var i = 0
var j = 0
var t = null
for (i = 0; i < length(u); ++i) {
t = 0;
for (j = 0; j < length(u); ++j)
t += A(i,j) * u[j];
v[i] = t;
}
}
function Atu(u,v) {
for (var i=0; i<length(u); ++i) {
var t = 0;
for (var j=0; j<length(u); ++j)
var i = 0
var j = 0
var t = null
for (i = 0; i < length(u); ++i) {
t = 0;
for (j = 0; j < length(u); ++j)
t += A(j,i) * u[j];
v[i] = t;
}
}
@@ -30,20 +36,26 @@ function AtAu(u,v,w) {
}
function spectralnorm(n) {
var i, u=[], v=[], w=[], vv=0, vBv=0;
for (i=0; i<n; ++i)
u[i] = 1; v[i] = w[i] = 0;
var i = 0
var u = []
var v = []
var w = []
var vv = 0
var vBv = 0
for (i = 0; i < n; ++i) {
u[i] = 1; v[i] = 0; w[i] = 0;
}
for (i=0; i<10; ++i) {
for (i = 0; i < 10; ++i) {
AtAu(u,v,w);
AtAu(v,u,w);
}
for (i=0; i<n; ++i) {
for (i = 0; i < n; ++i) {
vBv += u[i]*v[i];
vv += v[i]*v[i];
}
return math.sqrt(vBv/vv);
}
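The spectralnorm file above is the standard power-iteration benchmark: apply AᵀA ten times, then take a Rayleigh-quotient estimate of the largest singular value. A Python sketch of the same algorithm, using the conventional A(i,j) = 1/((i+j)(i+j+1)/2 + i + 1) from this benchmark family (an assumption; the diff does not show `A`'s body):

```python
import math

def A(i, j):
    # Entry of the infinite matrix used by the spectral-norm benchmark
    return 1.0 / ((i + j) * (i + j + 1) / 2 + i + 1)

def mat_vec(u, transpose=False):
    n = len(u)
    if transpose:
        return [sum(A(j, i) * u[j] for j in range(n)) for i in range(n)]
    return [sum(A(i, j) * u[j] for j in range(n)) for i in range(n)]

def spectralnorm(n):
    u = [1.0] * n
    for _ in range(10):
        v = mat_vec(mat_vec(u), transpose=True)  # v = A^T A u
        u = mat_vec(mat_vec(v), transpose=True)  # u = A^T A v
    # Rayleigh quotient: sqrt((u . v) / (v . v)) estimates ||A||_2
    vBv = sum(ui * vi for ui, vi in zip(u, v))
    vv = sum(vi * vi for vi in v)
    return math.sqrt(vBv / vv)
```

For n = 100 this converges to about 1.274219991, the benchmark's reference output.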


@@ -1,41 +1,22 @@
//
// wota_benchmark.js
//
// Usage in QuickJS:
// qjs wota_benchmark.js
//
// Prerequisite:
var wota = use('wota');
var os = use('os');
// or otherwise ensure `wota` and `os` are available.
// Make sure wota_benchmark.js is loaded after wota.js or combined with it.
//
var wota = use('internal/wota');
var os = use('internal/os');
var i = 0
// Helper to run a function repeatedly and measure total time in seconds.
// Returns elapsed time in seconds.
function measureTime(fn, iterations) {
var t1 = os.now();
for (var i = 0; i < iterations; i++) {
for (i = 0; i < iterations; i++) {
fn();
}
var t2 = os.now();
return t2 - t1;
}
// We'll define a function that does `encode -> decode` for a given value:
function roundTripWota(value) {
var encoded = wota.encode(value);
var decoded = wota.decode(encoded);
// Not doing a deep compare here, just measuring performance.
// (We trust the test suite to verify correctness.)
}
// A small suite of data we want to benchmark. Each entry includes:
// name: label for printing
// data: the test value(s) to encode/decode
// iterations: how many times to loop
//
// You can tweak these as you like for heavier or lighter tests.
def benchmarks = [
{
name: "Small Integers",
@@ -62,22 +43,17 @@ def benchmarks = [
},
{
name: "Large Array (1k numbers)",
// A thousand random numbers
data: [ array(1000, i => i *0.5) ],
iterations: 1000
},
];
// Print a header
log.console("Wota Encode/Decode Benchmark");
log.console("===================\n");
// We'll run each benchmark scenario in turn.
arrfor(benchmarks, function(bench) {
var totalIterations = bench.iterations * length(bench.data);
// We'll define a function that does a roundTrip for *each* data item in bench.data
// to measure in one loop iteration. Then we multiply by bench.iterations.
function runAllData() {
arrfor(bench.data, roundTripWota)
}
@@ -91,5 +67,4 @@ arrfor(benchmarks, function(bench) {
log.console(` Throughput: ${opsPerSec} encode+decode ops/sec\n`);
})
// All done
log.console("Benchmark completed.\n");


@@ -1,18 +1,9 @@
//
// benchmark_wota_nota_json.js
//
// Usage in QuickJS:
// qjs benchmark_wota_nota_json.js <LibraryName> <ScenarioName>
//
// Ensure wota, nota, json, and os are all available, e.g.:
var wota = use('wota');
var nota = use('nota');
var json = use('json');
var jswota = use('jswota')
var os = use('os');
//
var wota = use('internal/wota');
var nota = use('internal/nota');
var json = use('json');
var jswota = use('jswota')
var os = use('internal/os');
// Parse command line arguments
if (length(arg) != 2) {
log.console('Usage: cell benchmark_wota_nota_json.ce <LibraryName> <ScenarioName>');
$stop()
@@ -21,16 +12,11 @@ if (length(arg) != 2) {
var lib_name = arg[0];
var scenario_name = arg[1];
////////////////////////////////////////////////////////////////////////////////
// 1. Setup "libraries" array to easily switch among wota, nota, and json
////////////////////////////////////////////////////////////////////////////////
def libraries = [
{
name: "wota",
encode: wota.encode,
decode: wota.decode,
// wota produces an ArrayBuffer. We'll count `buffer.byteLength` as size.
getSize(encoded) {
return length(encoded);
}
@@ -39,7 +25,6 @@ def libraries = [
name: "nota",
encode: nota.encode,
decode: nota.decode,
// nota also produces an ArrayBuffer:
getSize(encoded) {
return length(encoded);
}
@@ -48,19 +33,12 @@ def libraries = [
name: "json",
encode: json.encode,
decode: json.decode,
// json produces a JS string. We'll measure its UTF-16 code unit length
// as a rough "size". Alternatively, you could convert to UTF-8 for
getSize(encodedStr) {
return length(encodedStr);
}
}
];
////////////////////////////////////////////////////////////////////////////////
// 2. Test data sets (similar to wota benchmarks).
// Each scenario has { name, data, iterations }
////////////////////////////////////////////////////////////////////////////////
def benchmarks = [
{
name: "empty",
@@ -102,42 +80,24 @@ def benchmarks = [
},
];
////////////////////////////////////////////////////////////////////////////////
// 3. Utility: measureTime(fn) => how long fn() takes in seconds.
////////////////////////////////////////////////////////////////////////////////
function measureTime(fn) {
var start = os.now();
fn();
var end = os.now();
return (end - start); // in seconds
return (end - start);
}
////////////////////////////////////////////////////////////////////////////////
// 4. For each library, we run each benchmark scenario and measure:
// - Encoding time (seconds)
// - Decoding time (seconds)
// - Total encoded size (bytes or code units for json)
//
////////////////////////////////////////////////////////////////////////////////
function runBenchmarkForLibrary(lib, bench) {
// We'll encode and decode each item in `bench.data`.
// We do 'bench.iterations' times. Then sum up total time.
// Pre-store the encoded results for all items so we can measure decode time
// in a separate pass. Also measure total size once.
var encodedList = [];
var totalSize = 0;
var i = 0
var j = 0
var e = null
// 1) Measure ENCODING
var encodeTime = measureTime(() => {
for (var i = 0; i < bench.iterations; i++) {
// For each data item, encode it
for (var j = 0; j < length(bench.data); j++) {
var e = lib.encode(bench.data[j]);
// store only in the very first iteration, so we can decode them later
// but do not store them every iteration or we blow up memory.
for (i = 0; i < bench.iterations; i++) {
for (j = 0; j < length(bench.data); j++) {
e = lib.encode(bench.data[j]);
if (i == 0) {
push(encodedList, e);
totalSize += lib.getSize(e);
@@ -146,9 +106,8 @@ function runBenchmarkForLibrary(lib, bench) {
}
});
// 2) Measure DECODING
var decodeTime = measureTime(() => {
for (var i = 0; i < bench.iterations; i++) {
for (i = 0; i < bench.iterations; i++) {
arrfor(encodedList, lib.decode)
}
});
@@ -156,11 +115,6 @@ function runBenchmarkForLibrary(lib, bench) {
return { encodeTime, decodeTime, totalSize };
}
////////////////////////////////////////////////////////////////////////////////
// 5. Main driver: run only the specified library and scenario
////////////////////////////////////////////////////////////////////////////////
// Find the requested library and scenario
var lib = libraries[find(libraries, l => l.name == lib_name)];
var bench = benchmarks[find(benchmarks, b => b.name == scenario_name)];
@@ -176,10 +130,11 @@ if (!bench) {
$stop()
}
// Run the benchmark for this library/scenario combination
var { encodeTime, decodeTime, totalSize } = runBenchmarkForLibrary(lib, bench);
var bench_result = runBenchmarkForLibrary(lib, bench);
var encodeTime = bench_result.encodeTime;
var decodeTime = bench_result.decodeTime;
var totalSize = bench_result.totalSize;
// Output json for easy parsing by hyperfine or other tools
var totalOps = bench.iterations * length(bench.data);
var result = {
lib: lib_name,

1541
boot/bootstrap.cm.mcode Normal file

File diff suppressed because it is too large

7115
boot/fold.cm.mcode Normal file

File diff suppressed because one or more lines are too long

14561
boot/mcode.cm.mcode Normal file

File diff suppressed because one or more lines are too long

13290
boot/parse.cm.mcode Normal file

File diff suppressed because one or more lines are too long

16556
boot/streamline.cm.mcode Normal file

File diff suppressed because one or more lines are too long

4573
boot/tokenize.cm.mcode Normal file

File diff suppressed because one or more lines are too long

74
boot_miscompile_bad.cm Normal file

@@ -0,0 +1,74 @@
// boot_miscompile_bad.cm — Documents a boot compiler miscompilation bug.
//
// BUG SUMMARY:
// The boot compiler's optimizer (likely compress_slots, eliminate_moves,
// or infer_param_types) miscompiles a specific pattern when it appears
// inside streamline.cm. The pattern: an array-loaded value used as a
// dynamic index for another array store, inside a guarded block:
//
// sv = instr[j]
// if (is_number(sv) && sv >= 0 && sv < nr_slots) {
// last_ref[sv] = i // <-- miscompiled: sv reads wrong slot
// }
//
// The bug is CONTEXT-DEPENDENT on streamline.cm's exact function/closure
// structure. A standalone module with the same pattern does NOT trigger it.
// The boot optimizer's cross-function analysis (infer_param_types, type
// propagation, etc.) makes different decisions in the full streamline.cm
// context, leading to the miscompilation.
//
// SYMPTOMS:
// - 'log' is not defined (comparison error path fires on non-comparable values)
// - array index must be a number (store_dynamic with corrupted index)
// - Error line has NO reference to 'log' — the reference comes from the
// error-reporting code path of the < operator
// - Non-deterministic: different error messages on different runs
// - NOT a GC bug: persists with --heap 4GB
// - NOT slot overflow: function has only 85 raw slots
//
// TO REPRODUCE:
// In streamline.cm, replace the build_slot_liveness function body with
// this version (raw operand scanning instead of get_slot_refs):
//
// var build_slot_liveness = function(instructions, nr_slots) {
// var last_ref = array(nr_slots, -1)
// var n = length(instructions)
// var i = 0
// var j = 0
// var limit = 0
// var sv = 0
// var instr = null
//
// while (i < n) {
// instr = instructions[i]
// if (is_array(instr)) {
// j = 1
// limit = length(instr) - 2
// while (j < limit) {
// sv = instr[j]
// if (is_number(sv) && sv >= 0 && sv < nr_slots) {
// last_ref[sv] = i
// }
// j = j + 1
// }
// }
// i = i + 1
// }
// return last_ref
// }
//
// Then: rm -rf .cell/build && ./cell --dev vm_suite
//
// WORKAROUND:
// Use get_slot_refs(instr) to iterate only over known slot-reference
// positions. This produces different IR that the boot optimizer handles
// correctly, and is also more semantically correct.
//
// FIXING:
// To find the root cause, compare the boot-compiled bytecodes of
// build_slot_liveness (in the full streamline.cm context) vs the
// source-compiled bytecodes. Use disasm.ce with --optimized to see
// what the source compiler produces. The boot-compiled bytecodes
// would need a C-level MachCode dump to inspect.
return null
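The guarded pattern the bug note describes (an array-loaded value reused as a dynamic store index) can be restated in Python to make the intended semantics unambiguous. This is a language-neutral sketch of what `build_slot_liveness` should compute, not the actual .cm code:

```python
def build_slot_liveness(instructions, nr_slots):
    # last_ref[s] = index of the last instruction referencing slot s, else -1
    last_ref = [-1] * nr_slots
    for i, instr in enumerate(instructions):
        if isinstance(instr, list):
            # Operands sit between the opcode (index 0) and a 2-element tail,
            # i.e. indices 1 .. len(instr)-3, matching j < length(instr)-2
            for sv in instr[1:len(instr) - 2]:
                if isinstance(sv, int) and 0 <= sv < nr_slots:
                    last_ref[sv] = i
    return last_ref
```

Under the miscompilation, the guarded read of `sv` resolves to the wrong slot, so `last_ref` is indexed with a corrupted value; the correct behavior is the straightforward last-write-wins scan above.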

148
build.ce

@@ -6,6 +6,7 @@
// cell build <locator> Build dynamic library for specific package
// cell build -t <target> Cross-compile dynamic libraries for target platform
// cell build -b <type> Build type: release (default), debug, or minsize
// cell build --verbose Print resolved flags, commands, and cache status
var build = use('build')
var shop = use('internal/shop')
@@ -15,62 +16,66 @@ var fd = use('fd')
var target = null
var target_package = null
var buildtype = 'release'
var verbose = false
var force_rebuild = false
var dry_run = false
var i = 0
var targets = null
var t = 0
var lib = null
var results = null
var success = 0
var failed = 0
for (var i = 0; i < length(args); i++) {
if (args[i] == '-t' || args[i] == '--target') {
if (i + 1 < length(args)) {
target = args[++i]
} else {
log.error('-t requires a target')
$stop()
}
} else if (args[i] == '-p' || args[i] == '--package') {
// Legacy support for -p flag
if (i + 1 < length(args)) {
target_package = args[++i]
} else {
log.error('-p requires a package name')
$stop()
}
} else if (args[i] == '-b' || args[i] == '--buildtype') {
if (i + 1 < length(args)) {
buildtype = args[++i]
if (buildtype != 'release' && buildtype != 'debug' && buildtype != 'minsize') {
log.error('Invalid buildtype: ' + buildtype + '. Must be release, debug, or minsize')
$stop()
var run = function() {
for (i = 0; i < length(args); i++) {
if (args[i] == '-t' || args[i] == '--target') {
if (i + 1 < length(args)) {
target = args[++i]
} else {
log.error('-t requires a target')
return
}
} else {
log.error('-b requires a buildtype (release, debug, minsize)')
$stop()
} else if (args[i] == '-p' || args[i] == '--package') {
// Legacy support for -p flag
if (i + 1 < length(args)) {
target_package = args[++i]
} else {
log.error('-p requires a package name')
return
}
} else if (args[i] == '-b' || args[i] == '--buildtype') {
if (i + 1 < length(args)) {
buildtype = args[++i]
if (buildtype != 'release' && buildtype != 'debug' && buildtype != 'minsize') {
log.error('Invalid buildtype: ' + buildtype + '. Must be release, debug, or minsize')
return
}
} else {
log.error('-b requires a buildtype (release, debug, minsize)')
return
}
} else if (args[i] == '--force') {
force_rebuild = true
} else if (args[i] == '--verbose' || args[i] == '-v') {
verbose = true
} else if (args[i] == '--dry-run') {
dry_run = true
} else if (args[i] == '--list-targets') {
log.console('Available targets:')
targets = build.list_targets()
for (t = 0; t < length(targets); t++) {
log.console(' ' + targets[t])
}
return
} else if (!starts_with(args[i], '-') && !target_package) {
// Positional argument - treat as package locator
target_package = args[i]
}
} else if (args[i] == '--force') {
force_rebuild = true
} else if (args[i] == '--dry-run') {
dry_run = true
} else if (args[i] == '--list-targets') {
log.console('Available targets:')
var targets = build.list_targets()
for (var t = 0; t < length(targets); t++) {
log.console(' ' + targets[t])
}
$stop()
} else if (!starts_with(args[i], '-') && !target_package) {
// Positional argument - treat as package locator
target_package = args[i]
}
}
// Resolve local paths to absolute paths
if (target_package) {
if (target_package == '.' || starts_with(target_package, './') || starts_with(target_package, '../') || fd.is_dir(target_package)) {
var resolved = fd.realpath(target_package)
if (resolved) {
target_package = resolved
}
}
}
if (target_package)
target_package = shop.resolve_locator(target_package)
// Detect target if not specified
if (!target) {
@@ -78,47 +83,48 @@ if (!target) {
if (target) log.console('Target: ' + target)
}
if (target && !build.has_target(target)) {
log.error('Invalid target: ' + target)
log.console('Available targets: ' + text(build.list_targets(), ', '))
$stop()
}
if (target && !build.has_target(target)) {
log.error('Invalid target: ' + target)
log.console('Available targets: ' + text(build.list_targets(), ', '))
return
}
var packages = shop.list_packages()
log.console('Preparing packages...')
arrfor(packages, function(package) {
if (package == 'core') return
shop.extract(package)
shop.sync(package, {no_build: true})
})
var _build = null
if (target_package) {
// Build single package
log.console('Building ' + target_package + '...')
try {
var lib = build.build_dynamic(target_package, target, buildtype)
_build = function() {
lib = build.build_dynamic(target_package, target, buildtype, {verbose: verbose, force: force_rebuild})
if (lib) {
log.console('Built: ' + lib)
log.console(`Built ${text(length(lib))} module(s)`)
}
} catch (e) {
log.error('Build failed: ' + e)
} disruption {
log.error('Build failed')
$stop()
}
_build()
} else {
// Build all packages
log.console('Building all packages...')
var results = build.build_all_dynamic(target, buildtype)
var success = 0
var failed = 0
for (var i = 0; i < length(results); i++) {
if (results[i].library) {
success++
} else if (results[i].error) {
failed++
results = build.build_all_dynamic(target, buildtype, {verbose: verbose, force: force_rebuild})
success = 0
failed = 0
for (i = 0; i < length(results); i++) {
if (results[i].modules) {
success = success + length(results[i].modules)
}
}
log.console(`Build complete: ${success} libraries built${failed > 0 ? `, ${failed} failed` : ''}`)
}
}
run()
$stop()

1157
build.cm

File diff suppressed because it is too large


@@ -1,13 +1,2 @@
[compilation]
CFLAGS = "-Isource -Wno-incompatible-pointer-types -Wno-missing-braces -Wno-strict-prototypes -Wno-unused-function -Wno-int-conversion"
LDFLAGS = "-lstdc++ -lm"
[compilation.macos_arm64]
CFLAGS = "-x objective-c"
LDFLAGS = "-framework CoreFoundation -framework CFNetwork"
[compilation.playdate]
CFLAGS = "-DMINIZ_NO_TIME -DTARGET_EXTENSION -DTARGET_PLAYDATE -I$LOCAL/PlaydateSDK/C_API"
[compilation.windows]
LDFLAGS = "-lws2_32 -lwinmm -liphlpapi -lbcrypt -lwinhttp -static-libgcc -static-libstdc++"

cellfs.cm

@@ -1,136 +1,131 @@
var cellfs = {}
// CellFS: A filesystem implementation using miniz and raw OS filesystem
// Supports mounting multiple sources (fs, zip, qop) and named mounts (@name)
var fd = use('fd')
var miniz = use('miniz')
var qop = use('qop')
var wildstar = use('wildstar')
var qop = use('internal/qop')
var wildstar = use('internal/wildstar')
// Internal state
var mounts = [] // Array of {source, type, handle, name}
var mounts = []
var writepath = "."
// Helper to normalize paths
function normalize_path(path) {
if (!path) return ""
// Remove leading/trailing slashes and normalize
return replace(path, /^\/+|\/+$/, "")
}
// Check if a file exists in a specific mount
function mount_exists(mount, path) {
var result = false
var full_path = null
var st = null
var _check = null
if (mount.type == 'zip') {
try {
_check = function() {
mount.handle.mod(path)
return true
} catch (e) {
return false
}
result = true
} disruption {}
_check()
} else if (mount.type == 'qop') {
try {
return mount.handle.stat(path) != null
} catch (e) {
return false
}
} else { // fs
var full_path = fd.join_paths(mount.source, path)
try {
var st = fd.stat(full_path)
return st.isFile || st.isDirectory
} catch (e) {
return false
}
_check = function() {
result = mount.handle.stat(path) != null
} disruption {}
_check()
} else {
full_path = fd.join_paths(mount.source, path)
_check = function() {
st = fd.stat(full_path)
result = st.isFile || st.isDirectory
} disruption {}
_check()
}
return result
}
// Check if a path (resolved against the mounts) refers to a directory
function is_directory(path) {
var res = resolve(path)
var mount = res.mount
var result = false
var full_path = null
var st = null
var _check = null
if (mount.type == 'zip') {
try {
return mount.handle.is_directory(path);
} catch (e) {
return false;
}
_check = function() {
result = mount.handle.is_directory(path)
} disruption {}
_check()
} else if (mount.type == 'qop') {
try {
return mount.handle.is_directory(path);
} catch (e) {
return false;
}
} else { // fs
var full_path = fd.join_paths(mount.source, path)
try {
var st = fd.stat(full_path)
return st.isDirectory
} catch (e) {
return false
}
_check = function() {
result = mount.handle.is_directory(path)
} disruption {}
_check()
} else {
full_path = fd.join_paths(mount.source, path)
_check = function() {
st = fd.stat(full_path)
result = st.isDirectory
} disruption {}
_check()
}
return result
}
// Resolve a path to a specific mount and relative path
// Returns { mount, path }, returns null, or disrupts
function resolve(path, must_exist) {
path = normalize_path(path)
// Check for named mount
if (starts_with(path, "@")) {
var idx = search(path, "/")
var mount_name = ""
var rel_path = ""
var idx = null
var mount_name = ""
var rel_path = ""
var mount = null
var found_mount = null
var npath = normalize_path(path)
if (starts_with(npath, "@")) {
idx = search(npath, "/")
if (idx == null) {
mount_name = text(path, 1)
mount_name = text(npath, 1)
rel_path = ""
} else {
mount_name = text(path, 1, idx)
rel_path = text(path, idx + 1)
mount_name = text(npath, 1, idx)
rel_path = text(npath, idx + 1)
}
// Find named mount
var mount = null
arrfor(mounts, function(m) {
if (m.name == mount_name) {
mount = m
return true
}
}, false, true)
if (!mount) {
throw Error("Unknown mount point: @" + mount_name)
log.error("Unknown mount point: @" + mount_name); disrupt
}
return { mount: mount, path: rel_path }
}
// Search path
var found_mount = null
arrfor(mounts, function(mount) {
if (mount_exists(mount, path)) {
found_mount = { mount: mount, path: path }
arrfor(mounts, function(m) {
if (mount_exists(m, npath)) {
found_mount = { mount: m, path: npath }
return true
}
}, false, true)
if (found_mount) {
return found_mount
}
if (must_exist) {
throw Error("File not found in any mount: " + path)
log.error("File not found in any mount: " + npath); disrupt
}
}
// Mount a source
function mount(source, name) {
// Check if source exists
var st = fd.stat(source)
var blob = null
var qop_archive = null
var zip = null
var _try_qop = null
var mount_info = {
source: source,
name: name || null,
@@ -138,74 +133,71 @@ function mount(source, name) {
handle: null,
zip_blob: null
}
if (st.isDirectory) {
mount_info.type = 'fs'
} else if (st.isFile) {
var blob = fd.slurp(source)
// Try QOP first: qop.open checks the magic number, so it fails fast.
var qop_archive = null
try {
qop_archive = qop.open(blob)
} catch(e) {}
blob = fd.slurp(source)
qop_archive = null
_try_qop = function() {
qop_archive = qop.open(blob)
} disruption {}
_try_qop()
if (qop_archive) {
mount_info.type = 'qop'
mount_info.handle = qop_archive
mount_info.zip_blob = blob // keep blob alive
mount_info.zip_blob = blob
} else {
var zip = miniz.read(blob)
zip = miniz.read(blob)
if (!is_object(zip) || !is_function(zip.count)) {
throw Error("Invalid archive file (not zip or qop): " + source)
log.error("Invalid archive file (not zip or qop): " + source); disrupt
}
mount_info.type = 'zip'
mount_info.handle = zip
mount_info.zip_blob = blob // keep blob alive
mount_info.zip_blob = blob
}
} else {
throw Error("Unsupported mount source type: " + source)
log.error("Unsupported mount source type: " + source); disrupt
}
push(mounts, mount_info)
}
// Unmount
function unmount(name_or_source) {
mounts = filter(mounts, function(mount) {
return mount.name != name_or_source && mount.source != name_or_source
mounts = filter(mounts, function(m) {
return m.name != name_or_source && m.source != name_or_source
})
}
// Read file
function slurp(path) {
var res = resolve(path, true)
if (!res) throw Error("File not found: " + path)
var data = null
var full_path = null
if (!res) { log.error("File not found: " + path); disrupt }
if (res.mount.type == 'zip') {
return res.mount.handle.slurp(res.path)
} else if (res.mount.type == 'qop') {
var data = res.mount.handle.read(res.path)
if (!data) throw Error("File not found in qop: " + path)
data = res.mount.handle.read(res.path)
if (!data) { log.error("File not found in qop: " + path); disrupt }
return data
} else {
var full_path = fd.join_paths(res.mount.source, res.path)
full_path = fd.join_paths(res.mount.source, res.path)
return fd.slurp(full_path)
}
}
// Write file
function slurpwrite(path, data) {
var full_path = writepath + "/" + path
var f = fd.open(full_path, 'w')
fd.write(f, data)
fd.close(f)
}
// Check existence
function exists(path) {
var res = resolve(path, false)
if (starts_with(path, "@")) {
@@ -214,29 +206,31 @@ function exists(path) {
return res != null
}
// Stat
function stat(path) {
var res = resolve(path, true)
if (!res) throw Error("File not found: " + path)
var mod = null
var s = null
var full_path = null
if (!res) { log.error("File not found: " + path); disrupt }
if (res.mount.type == 'zip') {
var mod = res.mount.handle.mod(res.path)
mod = res.mount.handle.mod(res.path)
return {
filesize: 0,
filesize: 0,
modtime: mod * 1000,
isDirectory: false
isDirectory: false
}
} else if (res.mount.type == 'qop') {
var s = res.mount.handle.stat(res.path)
if (!s) throw Error("File not found in qop: " + path)
s = res.mount.handle.stat(res.path)
if (!s) { log.error("File not found in qop: " + path); disrupt }
return {
filesize: s.size,
modtime: s.modtime,
isDirectory: s.isDirectory
}
} else {
var full_path = fd.join_paths(res.mount.source, res.path)
var s = fd.stat(full_path)
full_path = fd.join_paths(res.mount.source, res.path)
s = fd.stat(full_path)
return {
filesize: s.size,
modtime: s.mtime,
@@ -245,40 +239,38 @@ function stat(path) {
}
}
// Get search paths
function searchpath() {
return array(mounts)
}
// Mount a package using the shop system
function mount_package(name) {
if (name == null) {
mount('.', null)
return
}
var shop = use('internal/shop')
var dir = shop.get_package_dir(name)
if (!dir) {
throw Error("Package not found: " + name)
log.error("Package not found: " + name); disrupt
}
mount(dir, name)
}
// New functions for qjs_io compatibility
function match(str, pattern) {
return wildstar.match(pattern, str, wildstar.WM_PATHNAME | wildstar.WM_PERIOD | wildstar.WM_WILDSTAR)
}
function rm(path) {
var res = resolve(path, true)
if (res.mount.type != 'fs') throw Error("Cannot delete from non-fs mount")
var full_path = fd.join_paths(res.mount.source, res.path)
var st = fd.stat(full_path)
var full_path = null
var st = null
if (res.mount.type != 'fs') { log.error("Cannot delete from non-fs mount"); disrupt }
full_path = fd.join_paths(res.mount.source, res.path)
st = fd.stat(full_path)
if (st.isDirectory) fd.rmdir(full_path)
else fd.unlink(full_path)
}
@@ -306,55 +298,63 @@ function realdir(path) {
return fd.join_paths(res.mount.source, res.path)
}
function enumerate(path, recurse) {
if (path == null) path = ""
function enumerate(_path, recurse) {
var path = _path == null ? "" : _path
var res = resolve(path, true)
var results = []
var full = null
var st = null
var all = null
var prefix = null
var prefix_len = null
var seen = null
function visit(curr_full, rel_prefix) {
var list = fd.readdir(curr_full)
if (!list) return
arrfor(list, function(item) {
var item_rel = rel_prefix ? rel_prefix + "/" + item : item
var child_st = null
push(results, item_rel)
if (recurse) {
var st = fd.stat(fd.join_paths(curr_full, item))
if (st.isDirectory) {
child_st = fd.stat(fd.join_paths(curr_full, item))
if (child_st.isDirectory) {
visit(fd.join_paths(curr_full, item), item_rel)
}
}
})
}
if (res.mount.type == 'fs') {
var full = fd.join_paths(res.mount.source, res.path)
var st = fd.stat(full)
full = fd.join_paths(res.mount.source, res.path)
st = fd.stat(full)
if (st && st.isDirectory) {
visit(full, "")
}
} else if (res.mount.type == 'qop') {
var all = res.mount.handle.list()
var prefix = res.path ? res.path + "/" : ""
var prefix_len = length(prefix)
// Use a set to avoid duplicates if we are simulating directories
var seen = {}
all = res.mount.handle.list()
prefix = res.path ? res.path + "/" : ""
prefix_len = length(prefix)
seen = {}
arrfor(all, function(p) {
var rel = null
var slash = null
if (starts_with(p, prefix)) {
var rel = text(p, prefix_len)
rel = text(p, prefix_len)
if (length(rel) == 0) return
if (!recurse) {
var slash = search(rel, '/')
slash = search(rel, '/')
if (slash != null) {
rel = text(rel, 0, slash)
}
}
if (!seen[rel]) {
seen[rel] = true
push(results, rel)
@@ -362,15 +362,20 @@ function enumerate(path, recurse) {
}
})
}
return results
}
function globfs(globs, dir) {
if (dir == null) dir = ""
function globfs(globs, _dir) {
var dir = _dir == null ? "" : _dir
var res = resolve(dir, true)
var results = []
var full = null
var st = null
var all = null
var prefix = null
var prefix_len = null
function check_neg(path) {
var result = false
arrfor(globs, function(g) {
@@ -381,7 +386,7 @@ function globfs(globs, dir) {
}, false, true)
return result
}
function check_pos(path) {
var result = false
arrfor(globs, function(g) {
@@ -398,14 +403,14 @@ function globfs(globs, dir) {
var list = fd.readdir(curr_full)
if (!list) return
arrfor(list, function(item) {
var item_rel = rel_prefix ? rel_prefix + "/" + item : item
var child_full = fd.join_paths(curr_full, item)
var st = fd.stat(child_full)
if (st.isDirectory) {
var child_st = fd.stat(child_full)
if (child_st.isDirectory) {
if (!check_neg(item_rel)) {
visit(child_full, item_rel)
}
@@ -416,21 +421,22 @@ function globfs(globs, dir) {
}
})
}
if (res.mount.type == 'fs') {
var full = fd.join_paths(res.mount.source, res.path)
var st = fd.stat(full)
full = fd.join_paths(res.mount.source, res.path)
st = fd.stat(full)
if (st && st.isDirectory) {
visit(full, "")
}
} else if (res.mount.type == 'qop') {
var all = res.mount.handle.list()
var prefix = res.path ? res.path + "/" : ""
var prefix_len = length(prefix)
all = res.mount.handle.list()
prefix = res.path ? res.path + "/" : ""
prefix_len = length(prefix)
arrfor(all, function(p) {
var rel = null
if (starts_with(p, prefix)) {
var rel = text(p, prefix_len)
rel = text(p, prefix_len)
if (length(rel) == 0) return
if (!check_neg(rel) && check_pos(rel)) {
@@ -439,11 +445,10 @@ function globfs(globs, dir) {
}
})
}
return results
}
// Exports
cellfs.mount = mount
cellfs.mount_package = mount_package
cellfs.unmount = unmount
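
cellfs resolves paths by walking the mount list in order, with `@name/...` addressing a named mount directly and the first mount containing a plain path winning. A hedged Python sketch of that lookup order (the dict shape and `resolve` signature are assumptions for illustration, not the cellfs API):

```python
# Illustrative mount resolution: "@name/rest" selects a named mount;
# otherwise mounts are searched in order and the first hit wins.

def resolve(mounts, path):
    """mounts: list of {'name': str|None, 'files': set}.
    Returns (mount, relative_path), or None if not found."""
    path = path.strip("/")
    if path.startswith("@"):
        name, _, rel = path[1:].partition("/")
        for m in mounts:
            if m["name"] == name:
                return m, rel
        raise KeyError("unknown mount point: @" + name)
    for m in mounts:
        if path in m["files"]:
            return m, path
    return None
```

Returning `None` on a miss mirrors the `must_exist == false` case above; the named-mount branch fails loudly, as `resolve` does with an unknown `@name`.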

cfg.ce

@@ -0,0 +1,456 @@
// cfg.ce — control flow graph
//
// Usage:
// cell cfg --fn <N|name> <file> Text CFG for function
// cell cfg --dot --fn <N|name> <file> DOT output for graphviz
// cell cfg <file> Text CFG for all functions
var shop = use("internal/shop")
var pad_right = function(s, w) {
var r = s
while (length(r) < w) {
r = r + " "
}
return r
}
var fmt_val = function(v) {
if (is_null(v)) return "null"
if (is_number(v)) return text(v)
if (is_text(v)) return `"${v}"`
if (is_object(v)) return text(v)
if (is_logical(v)) return v ? "true" : "false"
return text(v)
}
var is_jump_op = function(op) {
return op == "jump" || op == "jump_true" || op == "jump_false" || op == "jump_null" || op == "jump_not_null"
}
var is_conditional_jump = function(op) {
return op == "jump_true" || op == "jump_false" || op == "jump_null" || op == "jump_not_null"
}
var is_terminator = function(op) {
return op == "return" || op == "disrupt" || op == "tail_invoke" || op == "goinvoke"
}
var run = function() {
var filename = null
var fn_filter = null
var show_dot = false
var use_optimized = false
var i = 0
var compiled = null
var main_name = null
var fi = 0
var func = null
var fname = null
while (i < length(args)) {
if (args[i] == '--fn') {
i = i + 1
fn_filter = args[i]
} else if (args[i] == '--dot') {
show_dot = true
} else if (args[i] == '--optimized') {
use_optimized = true
} else if (args[i] == '--help' || args[i] == '-h') {
log.console("Usage: cell cfg [--fn <N|name>] [--dot] [--optimized] <file>")
log.console("")
log.console(" --fn <N|name> Filter to function by index or name")
log.console(" --dot Output DOT format for graphviz")
log.console(" --optimized Use optimized IR")
return null
} else if (!starts_with(args[i], '-')) {
filename = args[i]
}
i = i + 1
}
if (!filename) {
log.console("Usage: cell cfg [--fn <N|name>] [--dot] [--optimized] <file>")
return null
}
if (use_optimized) {
compiled = shop.compile_file(filename)
} else {
compiled = shop.mcode_file(filename)
}
var fn_matches = function(index, name) {
var match = null
if (fn_filter == null) return true
if (index >= 0 && fn_filter == text(index)) return true
if (name != null) {
match = search(name, fn_filter)
if (match != null && match >= 0) return true
}
return false
}
var build_cfg = function(func) {
var instrs = func.instructions
var blocks = []
var label_to_block = {}
var pc_to_block = {}
var label_to_pc = {}
var block_start_pcs = {}
var after_terminator = false
var current_block = null
var current_label = null
var pc = 0
var ii = 0
var bi = 0
var instr = null
var op = null
var n = 0
var line_num = null
var blk = null
var last_instr_data = null
var last_op = null
var target_label = null
var target_bi = null
var edge_type = null
if (instrs == null || length(instrs) == 0) return []
// Pass 1: identify block start PCs
block_start_pcs["0"] = true
pc = 0
ii = 0
while (ii < length(instrs)) {
instr = instrs[ii]
if (is_array(instr)) {
op = instr[0]
if (after_terminator) {
block_start_pcs[text(pc)] = true
after_terminator = false
}
if (is_jump_op(op) || is_terminator(op)) {
after_terminator = true
}
pc = pc + 1
}
ii = ii + 1
}
// Pass 2: map labels to PCs and mark as block starts
pc = 0
ii = 0
while (ii < length(instrs)) {
instr = instrs[ii]
if (is_text(instr) && !starts_with(instr, "_nop_")) {
label_to_pc[instr] = pc
block_start_pcs[text(pc)] = true
} else if (is_array(instr)) {
pc = pc + 1
}
ii = ii + 1
}
// Pass 3: build basic blocks
pc = 0
ii = 0
current_label = null
while (ii < length(instrs)) {
instr = instrs[ii]
if (is_text(instr)) {
if (!starts_with(instr, "_nop_")) {
current_label = instr
}
ii = ii + 1
continue
}
if (is_array(instr)) {
if (block_start_pcs[text(pc)]) {
if (current_block != null) {
push(blocks, current_block)
}
current_block = {
id: length(blocks),
label: current_label,
start_pc: pc,
end_pc: pc,
instrs: [],
edges: [],
first_line: null,
last_line: null
}
current_label = null
}
if (current_block != null) {
push(current_block.instrs, {pc: pc, instr: instr})
current_block.end_pc = pc
n = length(instr)
line_num = instr[n - 2]
if (line_num != null) {
if (current_block.first_line == null) {
current_block.first_line = line_num
}
current_block.last_line = line_num
}
}
pc = pc + 1
}
ii = ii + 1
}
if (current_block != null) {
push(blocks, current_block)
}
// Build block index
bi = 0
while (bi < length(blocks)) {
pc_to_block[text(blocks[bi].start_pc)] = bi
if (blocks[bi].label != null) {
label_to_block[blocks[bi].label] = bi
}
bi = bi + 1
}
// Pass 4: compute edges
bi = 0
while (bi < length(blocks)) {
blk = blocks[bi]
if (length(blk.instrs) > 0) {
last_instr_data = blk.instrs[length(blk.instrs) - 1]
last_op = last_instr_data.instr[0]
n = length(last_instr_data.instr)
if (is_jump_op(last_op)) {
if (last_op == "jump") {
target_label = last_instr_data.instr[1]
} else {
target_label = last_instr_data.instr[2]
}
target_bi = label_to_block[target_label]
if (target_bi != null) {
edge_type = "jump"
if (target_bi <= bi) {
edge_type = "loop back-edge"
}
push(blk.edges, {target: target_bi, kind: edge_type})
}
if (is_conditional_jump(last_op)) {
if (bi + 1 < length(blocks)) {
push(blk.edges, {target: bi + 1, kind: "fallthrough"})
}
}
} else if (is_terminator(last_op)) {
push(blk.edges, {target: -1, kind: "EXIT (" + last_op + ")"})
} else {
if (bi + 1 < length(blocks)) {
push(blk.edges, {target: bi + 1, kind: "fallthrough"})
}
}
}
bi = bi + 1
}
return blocks
}
var print_cfg_text = function(blocks, name) {
var bi = 0
var blk = null
var header = null
var ii = 0
var idata = null
var instr = null
var op = null
var n = 0
var parts = null
var j = 0
var operands = null
var ei = 0
var edge = null
var target_label = null
log.compile(`\n=== ${name} ===`)
if (length(blocks) == 0) {
log.compile(" (empty)")
return null
}
bi = 0
while (bi < length(blocks)) {
blk = blocks[bi]
header = ` B${text(bi)}`
if (blk.label != null) {
header = header + ` "${blk.label}"`
}
header = header + ` [pc ${text(blk.start_pc)}-${text(blk.end_pc)}`
if (blk.first_line != null) {
if (blk.first_line == blk.last_line) {
header = header + `, line ${text(blk.first_line)}`
} else {
header = header + `, lines ${text(blk.first_line)}-${text(blk.last_line)}`
}
}
header = header + "]:"
log.compile(header)
ii = 0
while (ii < length(blk.instrs)) {
idata = blk.instrs[ii]
instr = idata.instr
op = instr[0]
n = length(instr)
parts = []
j = 1
while (j < n - 2) {
push(parts, fmt_val(instr[j]))
j = j + 1
}
operands = text(parts, ", ")
log.compile(` ${pad_right(text(idata.pc), 6)}${pad_right(op, 15)}${operands}`)
ii = ii + 1
}
ei = 0
while (ei < length(blk.edges)) {
edge = blk.edges[ei]
if (edge.target == -1) {
log.compile(` -> ${edge.kind}`)
} else {
target_label = blocks[edge.target].label
if (target_label != null) {
log.compile(` -> B${text(edge.target)} "${target_label}" (${edge.kind})`)
} else {
log.compile(` -> B${text(edge.target)} (${edge.kind})`)
}
}
ei = ei + 1
}
log.compile("")
bi = bi + 1
}
return null
}
var print_cfg_dot = function(blocks, name) {
var safe_name = replace(replace(name, '"', '\\"'), ' ', '_')
var bi = 0
var blk = null
var label_text = null
var ii = 0
var idata = null
var instr = null
var op = null
var n = 0
var parts = null
var j = 0
var operands = null
var ei = 0
var edge = null
var style = null
log.compile(`digraph "${safe_name}" {`)
log.compile(" rankdir=TB;")
log.compile(" node [shape=record, fontname=monospace, fontsize=10];")
bi = 0
while (bi < length(blocks)) {
blk = blocks[bi]
label_text = "B" + text(bi)
if (blk.label != null) {
label_text = label_text + " (" + blk.label + ")"
}
label_text = label_text + "\\npc " + text(blk.start_pc) + "-" + text(blk.end_pc)
if (blk.first_line != null) {
label_text = label_text + "\\nline " + text(blk.first_line)
}
label_text = label_text + "|"
ii = 0
while (ii < length(blk.instrs)) {
idata = blk.instrs[ii]
instr = idata.instr
op = instr[0]
n = length(instr)
parts = []
j = 1
while (j < n - 2) {
push(parts, fmt_val(instr[j]))
j = j + 1
}
operands = text(parts, ", ")
label_text = label_text + text(idata.pc) + " " + op + " " + replace(operands, '"', '\\"') + "\\l"
ii = ii + 1
}
log.compile(" B" + text(bi) + " [label=\"{" + label_text + "}\"];")
bi = bi + 1
}
// Edges
bi = 0
while (bi < length(blocks)) {
blk = blocks[bi]
ei = 0
while (ei < length(blk.edges)) {
edge = blk.edges[ei]
if (edge.target >= 0) {
style = ""
if (edge.kind == "loop back-edge") {
style = " [style=bold, color=red, label=\"loop\"]"
} else if (edge.kind == "fallthrough") {
style = " [style=dashed]"
}
log.compile(` B${text(bi)} -> B${text(edge.target)}${style};`)
}
ei = ei + 1
}
bi = bi + 1
}
log.compile("}")
return null
}
var process_function = function(func, name, index) {
var blocks = build_cfg(func)
if (show_dot) {
print_cfg_dot(blocks, name)
} else {
print_cfg_text(blocks, name)
}
return null
}
// Process functions
main_name = compiled.name != null ? compiled.name : "<main>"
if (compiled.main != null) {
if (fn_matches(-1, main_name)) {
process_function(compiled.main, main_name, -1)
}
}
if (compiled.functions != null) {
fi = 0
while (fi < length(compiled.functions)) {
func = compiled.functions[fi]
fname = func.name != null ? func.name : "<anonymous>"
if (fn_matches(fi, fname)) {
process_function(func, `[${text(fi)}] ${fname}`, fi)
}
fi = fi + 1
}
}
return null
}
run()
$stop()
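
The first passes of cfg.ce above compute basic-block boundaries: pc 0, every label target, and every instruction after a jump or terminator starts a block. A Python sketch of that leader computation (the `(op, target)` tuple format is an assumption for illustration; real mcode instructions carry more fields):

```python
# Block-leader pass, mirroring cfg.ce passes 1 and 2: collect the set of
# PCs at which a new basic block must begin.

JUMPS = {"jump", "jump_true", "jump_false", "jump_null", "jump_not_null"}
TERMINATORS = {"return", "disrupt", "tail_invoke", "goinvoke"}

def leaders(instrs, label_pc):
    """instrs: list of (op, target_label_or_None); label_pc: label -> pc.
    Returns the sorted list of PCs that start a basic block."""
    starts = {0}                            # entry is always a leader
    for pc, (op, target) in enumerate(instrs):
        if op in JUMPS:
            starts.add(label_pc[target])    # the jump target starts a block
        if op in JUMPS or op in TERMINATORS:
            if pc + 1 < len(instrs):
                starts.add(pc + 1)          # instruction after a jump/return
    return sorted(starts)
```

Edges then follow from each block's final instruction, exactly as pass 4 above does: jump edges to label targets, fallthrough edges to the next block after conditional jumps.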

clean.ce

@@ -23,40 +23,43 @@ var clean_build = false
var clean_fetch = false
var deep = false
var dry_run = false
var i = 0
var deps = null
for (var i = 0; i < length(args); i++) {
if (args[i] == '--build') {
clean_build = true
} else if (args[i] == '--fetch') {
clean_fetch = true
} else if (args[i] == '--all') {
clean_build = true
clean_fetch = true
} else if (args[i] == '--deep') {
deep = true
} else if (args[i] == '--dry-run') {
dry_run = true
} else if (args[i] == '--help' || args[i] == '-h') {
log.console("Usage: cell clean [<scope>] [options]")
log.console("")
log.console("Remove cached material to force refetch/rebuild.")
log.console("")
log.console("Scopes:")
log.console(" <locator> Clean specific package")
log.console(" shop Clean entire shop")
log.console(" world Clean all world packages")
log.console("")
log.console("Options:")
log.console(" --build Remove build outputs only (default)")
log.console(" --fetch Remove fetched sources only")
log.console(" --all Remove both build outputs and fetched sources")
log.console(" --deep Apply to full dependency closure")
log.console(" --dry-run Show what would be deleted")
$stop()
} else if (!starts_with(args[i], '-')) {
scope = args[i]
var run = function() {
for (i = 0; i < length(args); i++) {
if (args[i] == '--build') {
clean_build = true
} else if (args[i] == '--fetch') {
clean_fetch = true
} else if (args[i] == '--all') {
clean_build = true
clean_fetch = true
} else if (args[i] == '--deep') {
deep = true
} else if (args[i] == '--dry-run') {
dry_run = true
} else if (args[i] == '--help' || args[i] == '-h') {
log.console("Usage: cell clean [<scope>] [options]")
log.console("")
log.console("Remove cached material to force refetch/rebuild.")
log.console("")
log.console("Scopes:")
log.console(" <locator> Clean specific package")
log.console(" shop Clean entire shop")
log.console(" world Clean all world packages")
log.console("")
log.console("Options:")
log.console(" --build Remove build outputs only (default)")
log.console(" --fetch Remove fetched sources only")
log.console(" --all Remove both build outputs and fetched sources")
log.console(" --deep Apply to full dependency closure")
log.console(" --dry-run Show what would be deleted")
return
} else if (!starts_with(args[i], '-')) {
scope = args[i]
}
}
}
// Default to --build if nothing specified
if (!clean_build && !clean_fetch) {
@@ -73,12 +76,7 @@ var is_shop_scope = (scope == 'shop')
var is_world_scope = (scope == 'world')
if (!is_shop_scope && !is_world_scope) {
if (scope == '.' || starts_with(scope, './') || starts_with(scope, '../') || fd.is_dir(scope)) {
var resolved = fd.realpath(scope)
if (resolved) {
scope = resolved
}
}
scope = shop.resolve_locator(scope)
}
var files_to_delete = []
@@ -86,6 +84,7 @@ var dirs_to_delete = []
// Gather packages to clean
var packages_to_clean = []
var _gather = null
if (is_shop_scope) {
packages_to_clean = shop.list_packages()
@@ -97,14 +96,15 @@ if (is_shop_scope) {
push(packages_to_clean, scope)
if (deep) {
try {
var deps = pkg.gather_dependencies(scope)
_gather = function() {
deps = pkg.gather_dependencies(scope)
arrfor(deps, function(dep) {
push(packages_to_clean, dep)
})
} catch (e) {
} disruption {
// Skip if can't read dependencies
}
_gather()
}
}
@@ -114,37 +114,13 @@ var build_dir = shop.get_build_dir()
var packages_dir = replace(shop.get_package_dir(''), /\/$/, '') // Get base packages dir
if (clean_build) {
if (is_shop_scope) {
// Clean entire build and lib directories
if (fd.is_dir(build_dir)) {
push(dirs_to_delete, build_dir)
}
if (fd.is_dir(lib_dir)) {
push(dirs_to_delete, lib_dir)
}
} else {
// Clean specific package libraries
arrfor(packages_to_clean, function(p) {
if (p == 'core') return
var lib_name = shop.lib_name_for_package(p)
var dylib_ext = '.dylib'
var lib_path = lib_dir + '/' + lib_name + dylib_ext
if (fd.is_file(lib_path)) {
push(files_to_delete, lib_path)
}
// Also check for .so and .dll
var so_path = lib_dir + '/' + lib_name + '.so'
var dll_path = lib_dir + '/' + lib_name + '.dll'
if (fd.is_file(so_path)) {
push(files_to_delete, so_path)
}
if (fd.is_file(dll_path)) {
push(files_to_delete, dll_path)
}
})
// Nuke entire build cache (content-addressed, per-package clean impractical)
if (fd.is_dir(build_dir)) {
push(dirs_to_delete, build_dir)
}
// Clean orphaned lib/ directory if it exists (legacy)
if (fd.is_dir(lib_dir)) {
push(dirs_to_delete, lib_dir)
}
}
@@ -168,6 +144,7 @@ if (clean_fetch) {
}
// Execute or report
var deleted_count = 0
if (dry_run) {
log.console("Would delete:")
if (length(files_to_delete) == 0 && length(dirs_to_delete) == 0) {
@@ -181,20 +158,19 @@ if (dry_run) {
})
}
} else {
var deleted_count = 0
arrfor(files_to_delete, function(f) {
try {
var _del = function() {
fd.unlink(f)
log.console("Deleted: " + f)
deleted_count++
} catch (e) {
log.error("Failed to delete " + f + ": " + e)
} disruption {
log.error("Failed to delete " + f)
}
_del()
})
arrfor(dirs_to_delete, function(d) {
try {
var _del = function() {
if (fd.is_link(d)) {
fd.unlink(d)
} else {
@@ -202,9 +178,10 @@ if (dry_run) {
}
log.console("Deleted: " + d)
deleted_count++
} catch (e) {
log.error("Failed to delete " + d + ": " + e)
} disruption {
log.error("Failed to delete " + d)
}
_del()
})
if (deleted_count == 0) {
@@ -214,5 +191,7 @@ if (dry_run) {
log.console("Clean complete: " + text(deleted_count) + " item(s) deleted.")
}
}
}
run()
$stop()

clone.ce

@@ -5,117 +5,67 @@ var shop = use('internal/shop')
var link = use('link')
var fd = use('fd')
var http = use('http')
var miniz = use('miniz')
if (length(args) < 2) {
log.console("Usage: cell clone <origin> <path>")
log.console("Clones a cell package to a local path and links it.")
$stop()
return
}
var run = function() {
if (length(args) < 2) {
log.console("Usage: cell clone <origin> <path>")
log.console("Clones a cell package to a local path and links it.")
return
}
var origin = args[0]
var target_path = args[1]
// Resolve target path to absolute
if (target_path == '.' || starts_with(target_path, './') || starts_with(target_path, '../')) {
var resolved = fd.realpath(target_path)
if (resolved) {
target_path = resolved
} else {
// Path doesn't exist yet, resolve relative to cwd
var cwd = fd.realpath('.')
if (target_path == '.') {
target_path = cwd
} else if (starts_with(target_path, './')) {
target_path = cwd + text(target_path, 1)
} else if (starts_with(target_path, '../')) {
// Go up one directory from cwd
var parent = fd.dirname(cwd)
target_path = parent + text(target_path, 2)
}
}
}
target_path = shop.resolve_locator(target_path)
// Check if target already exists
if (fd.is_dir(target_path)) {
log.console("Error: " + target_path + " already exists")
$stop()
return
}
if (fd.is_dir(target_path)) {
log.console("Error: " + target_path + " already exists")
return
}
log.console("Cloning " + origin + " to " + target_path + "...")
// Get the latest commit
var info = shop.resolve_package_info(origin)
if (!info || info == 'local') {
log.console("Error: " + origin + " is not a remote package")
$stop()
return
}
if (!info || info == 'local') {
log.console("Error: " + origin + " is not a remote package")
return
}
// Update to get the commit hash
var update_result = shop.update(origin)
if (!update_result) {
log.console("Error: Could not fetch " + origin)
$stop()
return
}
if (!update_result) {
log.console("Error: Could not fetch " + origin)
return
}
// Fetch and extract to the target path
var lock = shop.load_lock()
var entry = lock[origin]
if (!entry || !entry.commit) {
log.console("Error: No commit found for " + origin)
$stop()
return
}
if (!entry || !entry.commit) {
log.console("Error: No commit found for " + origin)
return
}
var download_url = shop.get_download_url(origin, entry.commit)
log.console("Downloading from " + download_url)
try {
var _clone = function() {
var zip_blob = http.fetch(download_url)
// Extract zip to target path
var zip = miniz.read(zip_blob)
if (!zip) {
log.console("Error: Failed to read zip archive")
$stop()
return
}
// Create target directory
fd.mkdir(target_path)
var count = zip.count()
for (var i = 0; i < count; i++) {
if (zip.is_directory(i)) continue
var filename = zip.get_filename(i)
var first_slash = search(filename, '/')
if (first_slash == null) continue
if (first_slash + 1 >= length(filename)) continue
shop.install_zip(zip_blob, target_path)
var rel_path = text(filename, first_slash + 1)
var full_path = target_path + '/' + rel_path
var dir_path = fd.dirname(full_path)
// Ensure directory exists
if (!fd.is_dir(dir_path)) {
fd.mkdir(dir_path)
}
fd.slurpwrite(full_path, zip.slurp(filename))
}
log.console("Extracted to " + target_path)
// Link the origin to the cloned path
link.add(origin, target_path, shop)
log.console("Linked " + origin + " -> " + target_path)
} catch (e) {
log.console("Error: " + e.message)
if (e.stack) log.console(e.stack)
} disruption {
log.console("Error during clone")
}
_clone()
}
run()
$stop()
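
The extraction loop clone.ce replaces with `shop.install_zip` strips the archive's single top-level directory (entries arrive as `<root>/<rel>`) before writing files under the target path. A Python sketch of the same stripping, using illustrative names (this is not the shop API):

```python
# Extract a zip whose entries share a top-level root directory, writing
# each file under target with that leading component removed.
import os
import zipfile

def install_zip(zip_path, target):
    with zipfile.ZipFile(zip_path) as zf:
        for name in zf.namelist():
            if name.endswith("/"):
                continue                        # skip directory entries
            root, _, rel = name.partition("/")
            if not rel:
                continue                        # top-level file, no root dir
            dest = os.path.join(target, rel)    # drop the root component
            os.makedirs(os.path.dirname(dest) or target, exist_ok=True)
            with open(dest, "wb") as out:
                out.write(zf.read(name))
```

Skipping entries with no slash matches the `first_slash == null` guard in the old loop above.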

compare_aot.ce

@@ -0,0 +1,130 @@
// compare_aot.ce — compile a .ce/.cm file via both paths and compare results
//
// Usage:
// cell --dev compare_aot.ce <file.ce>
var build = use('build')
var fd_mod = use('fd')
var os = use('internal/os')
var json = use('json')
var time = use('time')
var show = function(v) {
if (v == null) return "null"
return json.encode(v)
}
if (length(args) < 1) {
log.compile('usage: cell --dev compare_aot.ce <file>')
return
}
var file = args[0]
if (!fd_mod.is_file(file)) {
if (!ends_with(file, '.ce') && fd_mod.is_file(file + '.ce'))
file = file + '.ce'
else if (!ends_with(file, '.cm') && fd_mod.is_file(file + '.cm'))
file = file + '.cm'
else {
log.error('file not found: ' + file)
return
}
}
var abs = fd_mod.realpath(file)
// Shared compilation front-end — uses raw modules for per-stage timing
var tokenize = use('tokenize')
var parse_mod = use('parse')
var fold = use('fold')
var mcode_mod = use('mcode')
var streamline_mod = use('streamline')
var t0 = time.number()
var src = text(fd_mod.slurp(abs))
var t1 = time.number()
var tok = tokenize(src, abs)
var t2 = time.number()
var ast = parse_mod(tok.tokens, src, abs, tokenize)
var t3 = time.number()
var folded = fold(ast)
var t4 = time.number()
var compiled = mcode_mod(folded)
var t5 = time.number()
var optimized = streamline_mod(compiled)
var t6 = time.number()
log.compile('--- front-end timing ---')
log.compile(' read: ' + text(t1 - t0) + 's')
log.compile(' tokenize: ' + text(t2 - t1) + 's')
log.compile(' parse: ' + text(t3 - t2) + 's')
log.compile(' fold: ' + text(t4 - t3) + 's')
log.compile(' mcode: ' + text(t5 - t4) + 's')
log.compile(' streamline: ' + text(t6 - t5) + 's')
log.compile(' total: ' + text(t6 - t0) + 's')
// Shared env for both paths — only non-intrinsic runtime functions.
// Intrinsics (starts_with, ends_with, logical, some, every, etc.) live on
// the stoned global and are found via GETINTRINSIC/cell_rt_get_intrinsic.
var env = stone({
log: log,
fallback: fallback,
parallel: parallel,
race: race,
sequence: sequence,
use
})
// --- Interpreted (mach VM) ---
var result_interp = null
var interp_ok = false
var run_interp = function() {
log.compile('--- interpreted ---')
var mcode_json = json.encode(optimized)
var mach_blob = mach_compile_mcode_bin(abs, mcode_json)
result_interp = mach_load(mach_blob, env)
interp_ok = true
log.compile('result: ' + show(result_interp))
} disruption {
interp_ok = true
log.compile('(disruption escaped from interpreted run)')
}
run_interp()
// --- Native (AOT via QBE) ---
var result_native = null
var native_ok = false
var run_native = function() {
log.compile('\n--- native ---')
var dylib_path = build.compile_native_ir(optimized, abs, null)
log.compile('dylib: ' + dylib_path)
var handle = os.dylib_open(dylib_path)
if (!handle) {
log.error('failed to open dylib')
return
}
result_native = os.native_module_load(handle, env)
native_ok = true
log.compile('result: ' + show(result_native))
} disruption {
native_ok = true
log.compile('(disruption escaped from native run)')
}
run_native()
// --- Comparison ---
log.compile('\n--- comparison ---')
var s_interp = show(result_interp)
var s_native = show(result_native)
if (interp_ok && native_ok) {
if (s_interp == s_native) {
log.compile('MATCH')
} else {
log.error('MISMATCH')
log.error(' interp: ' + s_interp)
log.error(' native: ' + s_native)
}
} else {
if (!interp_ok) log.error('interpreted run failed')
if (!native_ok) log.error('native run failed')
}

27
compile.ce Normal file

@@ -0,0 +1,27 @@
// compile.ce — compile a .cm or .ce file to native .dylib via QBE
//
// Usage:
// cell compile <file.cm|file.ce>
//
// Installs the dylib to .cell/lib/<pkg>/<stem>.dylib
var shop = use('internal/shop')
var build = use('build')
var fd = use('fd')
if (length(args) < 1) {
log.compile('usage: cell compile <file.cm|file.ce>')
return
}
var file = args[0]
if (!fd.is_file(file)) {
log.error('file not found: ' + file)
return
}
var abs = fd.realpath(file)
var file_info = shop.file_info(abs)
var pkg = file_info.package
build.compile_native(abs, null, null, pkg)

261
config.ce

@@ -47,8 +47,10 @@ function get_nested(obj, path) {
// Set a value in nested object using path
function set_nested(obj, path, value) {
var current = obj
for (var i = 0; i < length(path) - 1; i++) {
var segment = path[i]
var i = 0
var segment = null
for (i = 0; i < length(path) - 1; i++) {
segment = path[i]
if (is_null(current[segment]) || !is_object(current[segment])) {
current[segment] = {}
}
@@ -59,15 +61,17 @@ function set_nested(obj, path, value) {
// Parse value string into appropriate type
function parse_value(str) {
var num_str = null
var n = null
// Boolean
if (str == 'true') return true
if (str == 'false') return false
// Number (including underscores)
var num_str = replace(str, /_/g, '')
if (/^-?\d+$/.test(num_str)) return parseInt(num_str)
if (/^-?\d*\.\d+$/.test(num_str)) return parseFloat(num_str)
// Number
num_str = replace(str, /_/g, '')
n = number(num_str)
if (n != null) return n
// String
return str
}
@@ -75,22 +79,19 @@ function parse_value(str) {
// Format value for display
function format_value(val) {
if (is_text(val)) return '"' + val + '"'
if (is_number(val) && val >= 1000) {
// Add underscores to large numbers
return replace(val.toString(), /\B(?=(\d{3})+(?!\d))/g, '_')
}
return text(val)
}
// Print configuration tree recursively
function print_config(obj, prefix = '') {
function print_config(obj, pfx) {
var p = pfx || ''
arrfor(array(obj), function(key) {
var val = obj[key]
var full_key = prefix ? prefix + '.' + key : key
var full_key = p ? p + '.' + key : key
if (is_object(val))
print_config(val, full_key)
else
else if (!is_null(val))
log.console(full_key + ' = ' + format_value(val))
})
}
@@ -99,151 +100,123 @@ function print_config(obj, prefix = '') {
if (length(args) == 0) {
print_help()
$stop()
return
}
var config = pkg.load_config()
if (!config) {
log.error("Failed to load cell.toml")
$stop()
return
}
var command = args[0]
var key
var path
var value
var key = null
var path = null
var value = null
var value_str = null
var valid_system_keys = null
var actor_name = null
var actor_cmd = null
switch (command) {
case 'help':
case '-h':
case '--help':
print_help()
break
case 'list':
log.console("# Cell Configuration")
log.console("")
print_config(config)
break
case 'get':
if (length(args) < 2) {
log.error("Usage: cell config get <key>")
if (command == 'help' || command == '-h' || command == '--help') {
print_help()
} else if (command == 'list') {
log.console("# Cell Configuration")
log.console("")
print_config(config)
} else if (command == 'get') {
if (length(args) < 2) {
log.error("Usage: cell config get <key>")
$stop()
}
key = args[1]
path = parse_key(key)
value = get_nested(config, path)
if (value == null) {
log.error("Key not found: " + key)
} else if (is_object(value)) {
print_config(value, key)
} else {
log.console(key + ' = ' + format_value(value))
}
} else if (command == 'set') {
if (length(args) < 3) {
log.error("Usage: cell config set <key> <value>")
$stop()
}
key = args[1]
value_str = args[2]
path = parse_key(key)
value = parse_value(value_str)
if (path[0] == 'system') {
valid_system_keys = [
'ar_timer', 'actor_memory', 'net_service',
'reply_timeout', 'actor_max', 'stack_max'
]
if (find(valid_system_keys, path[1]) == null) {
log.error("Invalid system key. Valid keys: " + text(valid_system_keys, ', '))
$stop()
return
}
key = args[1]
path = parse_key(key)
value = get_nested(config, path)
if (value == null) {
log.error("Key not found: " + key)
} else if (isa(value, object)) {
// Print all nested values
print_config(value, key)
}
set_nested(config, path, value)
pkg.save_config(config)
log.console("Set " + key + " = " + format_value(value))
} else if (command == 'actor') {
if (length(args) < 3) {
log.error("Usage: cell config actor <name> <command> [options]")
$stop()
}
actor_name = args[1]
actor_cmd = args[2]
config.actors = config.actors || {}
config.actors[actor_name] = config.actors[actor_name] || {}
if (actor_cmd == 'list') {
if (length(array(config.actors[actor_name])) == 0) {
log.console("No configuration for actor: " + actor_name)
} else {
log.console(key + ' = ' + format_value(value))
log.console("# Configuration for actor: " + actor_name)
log.console("")
print_config(config.actors[actor_name], 'actors.' + actor_name)
}
break
case 'set':
if (length(args) < 3) {
log.error("Usage: cell config set <key> <value>")
} else if (actor_cmd == 'get') {
if (length(args) < 4) {
log.error("Usage: cell config actor <name> get <key>")
$stop()
return
}
var key = args[1]
var value_str = args[2]
var path = parse_key(key)
var value = parse_value(value_str)
// Validate system keys
if (path[0] == 'system') {
var valid_system_keys = [
'ar_timer', 'actor_memory', 'net_service',
'reply_timeout', 'actor_max', 'stack_max'
]
if (find(valid_system_keys, path[1]) == null) {
log.error("Invalid system key. Valid keys: " + text(valid_system_keys, ', '))
$stop()
return
}
key = args[3]
path = parse_key(key)
value = get_nested(config.actors[actor_name], path)
if (value == null) {
log.error("Key not found for actor " + actor_name + ": " + key)
} else {
log.console('actors.' + actor_name + '.' + key + ' = ' + format_value(value))
}
set_nested(config, path, value)
} else if (actor_cmd == 'set') {
if (length(args) < 5) {
log.error("Usage: cell config actor <name> set <key> <value>")
$stop()
}
key = args[3]
value_str = args[4]
path = parse_key(key)
value = parse_value(value_str)
set_nested(config.actors[actor_name], path, value)
pkg.save_config(config)
log.console("Set " + key + " = " + format_value(value))
break
case 'actor':
// Handle actor-specific configuration
if (length(args) < 3) {
log.error("Usage: cell config actor <name> <command> [options]")
$stop()
return
}
var actor_name = args[1]
var actor_cmd = args[2]
// Initialize actors section if needed
config.actors = config.actors || {}
config.actors[actor_name] = config.actors[actor_name] || {}
switch (actor_cmd) {
case 'list':
if (length(array(config.actors[actor_name])) == 0) {
log.console("No configuration for actor: " + actor_name)
} else {
log.console("# Configuration for actor: " + actor_name)
log.console("")
print_config(config.actors[actor_name], 'actors.' + actor_name)
}
break
case 'get':
if (length(args) < 4) {
log.error("Usage: cell config actor <name> get <key>")
$stop()
return
}
key = args[3]
path = parse_key(key)
value = get_nested(config.actors[actor_name], path)
if (value == null) {
log.error("Key not found for actor " + actor_name + ": " + key)
} else {
log.console('actors.' + actor_name + '.' + key + ' = ' + format_value(value))
}
break
case 'set':
if (length(args) < 5) {
log.error("Usage: cell config actor <name> set <key> <value>")
$stop()
return
}
key = args[3]
var value_str = args[4]
path = parse_key(key)
value = parse_value(value_str)
set_nested(config.actors[actor_name], path, value)
pkg.save_config(config)
log.console("Set actors." + actor_name + "." + key + " = " + format_value(value))
break
default:
log.error("Unknown actor command: " + actor_cmd)
log.console("Valid commands: list, get, set")
}
break
default:
log.error("Unknown command: " + command)
print_help()
log.console("Set actors." + actor_name + "." + key + " = " + format_value(value))
} else {
log.error("Unknown actor command: " + actor_cmd)
log.console("Valid commands: list, get, set")
}
} else {
log.error("Unknown command: " + command)
print_help()
}
$stop()
$stop()


@@ -1,27 +1,15 @@
#include "cell.h"
// Return the current stack depth.
JSC_CCALL(debug_stack_depth, return number2js(js,js_debugger_stack_depth(js)))
// TODO: Reimplement stack depth for register VM
JSC_CCALL(debug_stack_depth, return number2js(js, 0))
// Return a backtrace of the current call stack.
JSC_CCALL(debug_build_backtrace, return js_debugger_build_backtrace(js,NULL))
// Return the closure variables for a given function.
JSC_CCALL(debug_closure_vars, return js_debugger_closure_variables(js,argv[0]))
JSC_CCALL(debug_set_closure_var,
js_debugger_set_closure_variable(js,argv[0],argv[1],argv[2]);
return JS_NULL;
)
// Return the local variables for a specific stack frame.
JSC_CCALL(debug_local_vars, return js_debugger_local_variables(js, js2number(js,argv[0])))
// Return metadata about a given function.
JSC_CCALL(debug_fn_info, return js_debugger_fn_info(js, argv[0]))
// Return an array of functions in the current backtrace.
JSC_CCALL(debug_backtrace_fns, return js_debugger_backtrace_fns(js,NULL))
// TODO: Reimplement debug introspection for register VM
JSC_CCALL(debug_build_backtrace, return JS_NewArray(js))
JSC_CCALL(debug_closure_vars, return JS_NewObject(js))
JSC_CCALL(debug_set_closure_var, return JS_NULL;)
JSC_CCALL(debug_local_vars, return JS_NewObject(js))
JSC_CCALL(debug_fn_info, return JS_NewObject(js))
JSC_CCALL(debug_backtrace_fns, return JS_NewArray(js))
static const JSCFunctionListEntry js_debug_funcs[] = {
MIST_FUNC_DEF(debug, stack_depth, 0),
@@ -33,8 +21,9 @@ static const JSCFunctionListEntry js_debug_funcs[] = {
MIST_FUNC_DEF(debug, backtrace_fns,0),
};
JSValue js_debug_use(JSContext *js) {
JSValue mod = JS_NewObject(js);
JS_SetPropertyFunctionList(js,mod,js_debug_funcs,countof(js_debug_funcs));
return mod;
}
JSValue js_core_debug_use(JSContext *js) {
JS_FRAME(js);
JS_ROOT(mod, JS_NewObject(js));
JS_SetPropertyFunctionList(js, mod.val, js_debug_funcs, countof(js_debug_funcs));
JS_RETURN(mod.val);
}


@@ -1,106 +1,28 @@
#include "cell.h"
JSC_CCALL(os_mem_limit, JS_SetMemoryLimit(JS_GetRuntime(js), js2number(js,argv[0])))
JSC_CCALL(os_max_stacksize, JS_SetMaxStackSize(JS_GetRuntime(js), js2number(js,argv[0])))
JSC_CCALL(os_max_stacksize, JS_SetMaxStackSize(js, js2number(js,argv[0])))
// Compute the approximate size of a single JS value in memory.
// TODO: Reimplement memory usage reporting for new allocator
JSC_CCALL(os_calc_mem,
JSMemoryUsage mu;
JS_ComputeMemoryUsage(JS_GetRuntime(js),&mu);
ret = JS_NewObject(js);
JS_SetPropertyStr(js,ret,"malloc_size",number2js(js,mu.malloc_size));
JS_SetPropertyStr(js,ret,"malloc_limit",number2js(js,mu.malloc_limit));
JS_SetPropertyStr(js,ret,"memory_used_size",number2js(js,mu.memory_used_size));
JS_SetPropertyStr(js,ret,"malloc_count",number2js(js,mu.malloc_count));
JS_SetPropertyStr(js,ret,"memory_used_count",number2js(js,mu.memory_used_count));
/* atom_count and atom_size removed - atoms are now just strings */
JS_SetPropertyStr(js,ret,"str_count",number2js(js,mu.str_count));
JS_SetPropertyStr(js,ret,"str_size",number2js(js,mu.str_size));
JS_SetPropertyStr(js,ret,"obj_count",number2js(js,mu.obj_count));
JS_SetPropertyStr(js,ret,"obj_size",number2js(js,mu.obj_size));
JS_SetPropertyStr(js,ret,"prop_count",number2js(js,mu.prop_count));
JS_SetPropertyStr(js,ret,"prop_size",number2js(js,mu.prop_size));
JS_SetPropertyStr(js,ret,"shape_count",number2js(js,mu.shape_count));
JS_SetPropertyStr(js,ret,"shape_size",number2js(js,mu.shape_size));
JS_SetPropertyStr(js,ret,"js_func_count",number2js(js,mu.js_func_count));
JS_SetPropertyStr(js,ret,"js_func_size",number2js(js,mu.js_func_size));
JS_SetPropertyStr(js,ret,"js_func_code_size",number2js(js,mu.js_func_code_size));
JS_SetPropertyStr(js,ret,"js_func_pc2line_count",number2js(js,mu.js_func_pc2line_count));
JS_SetPropertyStr(js,ret,"js_func_pc2line_size",number2js(js,mu.js_func_pc2line_size));
JS_SetPropertyStr(js,ret,"c_func_count",number2js(js,mu.c_func_count));
JS_SetPropertyStr(js,ret,"array_count",number2js(js,mu.array_count));
JS_SetPropertyStr(js,ret,"fast_array_count",number2js(js,mu.fast_array_count));
JS_SetPropertyStr(js,ret,"fast_array_elements",number2js(js,mu.fast_array_elements));
JS_SetPropertyStr(js,ret,"binary_object_count",number2js(js,mu.binary_object_count));
JS_SetPropertyStr(js,ret,"binary_object_size",number2js(js,mu.binary_object_size));
)
// Evaluate a string of JavaScript code in the current QuickJS context.
JSC_SSCALL(os_eval,
if (!str2) return JS_ThrowReferenceError(js, "Second argument should be the script.");
if (!str) return JS_ThrowReferenceError(js, "First argument should be the name of the script.");
ret = JS_Eval(js,str2,strlen(str2),str, 0);
)
// Compile a string of JavaScript code into a function object.
JSC_SSCALL(js_compile,
if (!str2) return JS_ThrowReferenceError(js, "Second argument should be the script.");
if (!str) return JS_ThrowReferenceError(js, "First argument should be the name of the script.");
ret = JS_Eval(js, str2, strlen(str2), str, JS_EVAL_FLAG_COMPILE_ONLY | JS_EVAL_FLAG_BACKTRACE_BARRIER);
)
// Evaluate a function object in the current QuickJS context.
JSC_CCALL(js_eval_compile,
JS_DupValue(js,argv[0]);
ret = JS_EvalFunction(js, argv[0]);
)
// Compile a function object into a bytecode blob.
JSC_CCALL(js_compile_blob,
size_t size;
uint8_t *data = JS_WriteObject(js, &size, argv[0], JS_WRITE_OBJ_BYTECODE);
if (!data) {
return JS_ThrowInternalError(js, "Failed to serialize bytecode");
}
ret = js_new_blob_stoned_copy(js, data, size);
js_free(js, data);
)
// Compile a bytecode blob into a function object.
JSC_CCALL(js_compile_unblob,
size_t size;
void *data = js_get_blob_data(js, &size, argv[0]);
if (data == (void *)-1) return JS_EXCEPTION;
if (!data) return JS_ThrowReferenceError(js, "No data present in blob.");
return JS_ReadObject(js, data, size, JS_READ_OBJ_BYTECODE);
)
// Disassemble a function object into a string.
JSC_CCALL(js_disassemble,
return js_debugger_fn_bytecode(js, argv[0]);
)
// Return metadata about a given function.
JSC_CCALL(js_fn_info,
return js_debugger_fn_info(js, argv[0]);
)
// TODO: Reimplement for register VM
JSC_CCALL(js_disassemble, return JS_NewArray(js);)
JSC_CCALL(js_fn_info, return JS_NewObject(js);)
static const JSCFunctionListEntry js_js_funcs[] = {
MIST_FUNC_DEF(os, calc_mem, 0),
MIST_FUNC_DEF(os, mem_limit, 1),
MIST_FUNC_DEF(os, max_stacksize, 1),
MIST_FUNC_DEF(os, eval, 2),
MIST_FUNC_DEF(js, compile, 2),
MIST_FUNC_DEF(js, eval_compile, 1),
MIST_FUNC_DEF(js, compile_blob, 1),
MIST_FUNC_DEF(js, compile_unblob, 1),
MIST_FUNC_DEF(js, disassemble, 1),
MIST_FUNC_DEF(js, fn_info, 1),
};
JSValue js_js_use(JSContext *js) {
JSValue mod = JS_NewObject(js);
JS_SetPropertyFunctionList(js,mod,js_js_funcs,countof(js_js_funcs));
return mod;
}
JSValue js_core_js_use(JSContext *js) {
JS_FRAME(js);
JS_ROOT(mod, JS_NewObject(js));
JS_SetPropertyFunctionList(js, mod.val, js_js_funcs, countof(js_js_funcs));
JS_RETURN(mod.val);
}

223
diff.ce Normal file

@@ -0,0 +1,223 @@
// diff.ce — differential testing: run tests optimized vs unoptimized, compare results
//
// Usage:
// cell diff - diff all test files in current package
// cell diff suite - diff a specific test file (tests/suite.cm)
// cell diff tests/foo - diff a specific test file by path
var shop = use('internal/shop')
var pkg = use('package')
var fd = use('fd')
var time = use('time')
var testlib = use('internal/testlib')
var _args = args == null ? [] : args
var analyze = use('internal/os').analyze
var run_ast_fn = use('internal/os').run_ast_fn
var run_ast_noopt_fn = use('internal/os').run_ast_noopt_fn
if (!run_ast_noopt_fn) {
log.console("error: run_ast_noopt_fn not available (rebuild bootstrap)")
$stop()
return
}
// Parse arguments: diff [test_path]
var target_test = null
if (length(_args) > 0) {
target_test = _args[0]
}
var is_valid_package = testlib.is_valid_package
if (!is_valid_package('.')) {
log.console('No cell.toml found in current directory')
$stop()
return
}
// Collect test files
function collect_tests(specific_test) {
var files = pkg.list_files(null)
var test_files = []
var i = 0
var f = null
var test_name = null
var match_name = null
var match_base = null
for (i = 0; i < length(files); i++) {
f = files[i]
if (starts_with(f, "tests/") && ends_with(f, ".cm")) {
if (specific_test) {
test_name = text(f, 0, -3)
match_name = specific_test
if (!starts_with(match_name, 'tests/')) match_name = 'tests/' + match_name
match_base = ends_with(match_name, '.cm') ? text(match_name, 0, -3) : match_name
if (test_name != match_base) continue
}
push(test_files, f)
}
}
return test_files
}
var values_equal = testlib.values_equal
var describe = testlib.describe
// Run a single test file through both paths
function diff_test_file(file_path) {
var mod_path = text(file_path, 0, -3)
var src_path = fd.realpath('.') + '/' + file_path
var src = null
var ast = null
var mod_opt = null
var mod_noopt = null
var results = {file: file_path, tests: [], passed: 0, failed: 0, errors: []}
var use_pkg = fd.realpath('.')
var opt_error = null
var noopt_error = null
var keys = null
var i = 0
var k = null
var opt_result = null
var noopt_result = null
var opt_err = null
var noopt_err = null
var _run_one_opt = null
var _run_one_noopt = null
// Build env for module loading
var make_env = function() {
return stone({
use: function(path) {
return shop.use(path, use_pkg)
}
})
}
// Read and parse
var _read = function() {
src = text(fd.slurp(src_path))
ast = analyze(src, src_path)
} disruption {
push(results.errors, `failed to parse ${file_path}`)
return results
}
_read()
if (length(results.errors) > 0) return results
// Run optimized
var _run_opt = function() {
mod_opt = run_ast_fn(mod_path, ast, make_env())
} disruption {
opt_error = "disrupted"
}
_run_opt()
// Run unoptimized
var _run_noopt = function() {
mod_noopt = run_ast_noopt_fn(mod_path, ast, make_env())
} disruption {
noopt_error = "disrupted"
}
_run_noopt()
// Compare module-level behavior
if (opt_error != noopt_error) {
push(results.errors, `module load mismatch: opt=${opt_error != null ? opt_error : "ok"} noopt=${noopt_error != null ? noopt_error : "ok"}`)
results.failed = results.failed + 1
return results
}
if (opt_error != null) {
// Both disrupted during load — that's consistent
results.passed = results.passed + 1
push(results.tests, {name: "<module>", status: "passed"})
return results
}
// If module returns a record of functions, test each one
if (is_object(mod_opt) && is_object(mod_noopt)) {
keys = array(mod_opt)
while (i < length(keys)) {
k = keys[i]
if (is_function(mod_opt[k]) && is_function(mod_noopt[k])) {
opt_result = null
noopt_result = null
opt_err = null
noopt_err = null
_run_one_opt = function() {
opt_result = mod_opt[k]()
} disruption {
opt_err = "disrupted"
}
_run_one_opt()
_run_one_noopt = function() {
noopt_result = mod_noopt[k]()
} disruption {
noopt_err = "disrupted"
}
_run_one_noopt()
if (opt_err != noopt_err) {
push(results.tests, {name: k, status: "failed"})
push(results.errors, `${k}: disruption mismatch opt=${opt_err != null ? opt_err : "ok"} noopt=${noopt_err != null ? noopt_err : "ok"}`)
results.failed = results.failed + 1
} else if (!values_equal(opt_result, noopt_result)) {
push(results.tests, {name: k, status: "failed"})
push(results.errors, `${k}: result mismatch opt=${describe(opt_result)} noopt=${describe(noopt_result)}`)
results.failed = results.failed + 1
} else {
push(results.tests, {name: k, status: "passed"})
results.passed = results.passed + 1
}
}
i = i + 1
}
} else {
// Compare direct return values
if (!values_equal(mod_opt, mod_noopt)) {
push(results.tests, {name: "<return>", status: "failed"})
push(results.errors, `return value mismatch: opt=${describe(mod_opt)} noopt=${describe(mod_noopt)}`)
results.failed = results.failed + 1
} else {
push(results.tests, {name: "<return>", status: "passed"})
results.passed = results.passed + 1
}
}
return results
}
// Main
var test_files = collect_tests(target_test)
log.console(`Differential testing: ${text(length(test_files))} file(s)`)
var total_passed = 0
var total_failed = 0
var i = 0
var result = null
var j = 0
while (i < length(test_files)) {
result = diff_test_file(test_files[i])
log.console(` ${result.file}: ${text(result.passed)} passed, ${text(result.failed)} failed`)
j = 0
while (j < length(result.errors)) {
log.console(` MISMATCH: ${result.errors[j]}`)
j = j + 1
}
total_passed = total_passed + result.passed
total_failed = total_failed + result.failed
i = i + 1
}
log.console(`----------------------------------------`)
log.console(`Diff: ${text(total_passed)} passed, ${text(total_failed)} failed, ${text(total_passed + total_failed)} total`)
if (total_failed > 0) {
log.console(`DIFFERENTIAL FAILURES DETECTED`)
}
$stop()

310
diff_ir.ce Normal file

@@ -0,0 +1,310 @@
// diff_ir.ce — mcode vs streamline diff
//
// Usage:
// cell diff_ir <file> Diff all functions
// cell diff_ir --fn <N|name> <file> Diff only one function
// cell diff_ir --summary <file> Counts only
var fd = use("fd")
var shop = use("internal/shop")
var pad_right = function(s, w) {
var r = s
while (length(r) < w) {
r = r + " "
}
return r
}
var fmt_val = function(v) {
if (is_null(v)) return "null"
if (is_number(v)) return text(v)
if (is_text(v)) return `"${v}"`
if (is_object(v)) return text(v)
if (is_logical(v)) return v ? "true" : "false"
return text(v)
}
var run = function() {
var fn_filter = null
var show_summary = false
var filename = null
var i = 0
var mcode_ir = null
var opt_ir = null
var source_text = null
var source_lines = null
var main_name = null
var fi = 0
var func = null
var opt_func = null
var fname = null
while (i < length(args)) {
if (args[i] == '--fn') {
i = i + 1
fn_filter = args[i]
} else if (args[i] == '--summary') {
show_summary = true
} else if (args[i] == '--help' || args[i] == '-h') {
log.console("Usage: cell diff_ir [--fn <N|name>] [--summary] <file>")
log.console("")
log.console(" --fn <N|name> Filter to function by index or name")
log.console(" --summary Show counts only")
return null
} else if (!starts_with(args[i], '-')) {
filename = args[i]
}
i = i + 1
}
if (!filename) {
log.console("Usage: cell diff_ir [--fn <N|name>] [--summary] <file>")
return null
}
mcode_ir = shop.mcode_file(filename)
opt_ir = shop.compile_file(filename)
source_text = text(fd.slurp(filename))
source_lines = array(source_text, "\n")
var get_source_line = function(line_num) {
if (line_num < 1 || line_num > length(source_lines)) return null
return source_lines[line_num - 1]
}
var fn_matches = function(index, name) {
var match = null
if (fn_filter == null) return true
if (index >= 0 && fn_filter == text(index)) return true
if (name != null) {
match = search(name, fn_filter)
if (match != null && match >= 0) return true
}
return false
}
var fmt_instr = function(instr) {
var op = instr[0]
var n = length(instr)
var parts = []
var j = 1
var operands = null
var line_str = null
while (j < n - 2) {
push(parts, fmt_val(instr[j]))
j = j + 1
}
operands = text(parts, ", ")
line_str = instr[n - 2] != null ? `:${text(instr[n - 2])}` : ""
return pad_right(`${pad_right(op, 15)}${operands}`, 45) + line_str
}
var classify = function(before, after) {
var bn = 0
var an = 0
var k = 0
if (is_text(after) && starts_with(after, "_nop_")) return "eliminated"
if (is_array(before) && is_array(after)) {
if (before[0] != after[0]) return "rewritten"
bn = length(before)
an = length(after)
if (bn != an) return "rewritten"
k = 1
while (k < bn - 2) {
if (before[k] != after[k]) return "rewritten"
k = k + 1
}
return "identical"
}
return "identical"
}
var total_eliminated = 0
var total_rewritten = 0
var total_funcs = 0
var diff_function = function(mcode_func, opt_func, name, index) {
var nr_args = mcode_func.nr_args != null ? mcode_func.nr_args : 0
var nr_slots = mcode_func.nr_slots != null ? mcode_func.nr_slots : 0
var m_instrs = mcode_func.instructions
var o_instrs = opt_func.instructions
var eliminated = 0
var rewritten = 0
var mi = 0
var oi = 0
var pc = 0
var m_instr = null
var o_instr = null
var kind = null
var last_line = null
var instr_line = null
var n = 0
var src = null
var annotation = null
if (m_instrs == null) m_instrs = []
if (o_instrs == null) o_instrs = []
// First pass: count changes
mi = 0
oi = 0
while (mi < length(m_instrs) && oi < length(o_instrs)) {
m_instr = m_instrs[mi]
o_instr = o_instrs[oi]
if (is_text(m_instr)) {
mi = mi + 1
oi = oi + 1
continue
}
if (is_text(o_instr) && starts_with(o_instr, "_nop_")) {
if (is_array(m_instr)) {
eliminated = eliminated + 1
}
mi = mi + 1
oi = oi + 1
continue
}
if (is_array(m_instr) && is_array(o_instr)) {
kind = classify(m_instr, o_instr)
if (kind == "rewritten") {
rewritten = rewritten + 1
}
}
mi = mi + 1
oi = oi + 1
}
total_eliminated = total_eliminated + eliminated
total_rewritten = total_rewritten + rewritten
total_funcs = total_funcs + 1
if (show_summary) {
if (eliminated == 0 && rewritten == 0) {
log.compile(` ${pad_right(name + ":", 40)} 0 eliminated, 0 rewritten (unchanged)`)
} else {
log.compile(` ${pad_right(name + ":", 40)} ${text(eliminated)} eliminated, ${text(rewritten)} rewritten`)
}
return null
}
if (eliminated == 0 && rewritten == 0) return null
log.compile(`\n=== ${name} (args=${text(nr_args)}, slots=${text(nr_slots)}) ===`)
log.compile(` ${text(eliminated)} eliminated, ${text(rewritten)} rewritten`)
// Second pass: show diffs
mi = 0
oi = 0
pc = 0
last_line = null
while (mi < length(m_instrs) && oi < length(o_instrs)) {
m_instr = m_instrs[mi]
o_instr = o_instrs[oi]
if (is_text(m_instr)) {
mi = mi + 1
oi = oi + 1
continue
}
if (is_text(o_instr) && starts_with(o_instr, "_nop_")) {
if (is_array(m_instr)) {
n = length(m_instr)
instr_line = m_instr[n - 2]
if (instr_line != last_line && instr_line != null) {
src = get_source_line(instr_line)
if (src != null) src = trim(src)
if (last_line != null) log.compile("")
if (src != null && length(src) > 0) {
log.compile(` --- line ${text(instr_line)}: ${src} ---`)
}
last_line = instr_line
}
log.compile(` - ${pad_right(text(pc), 6)}${fmt_instr(m_instr)}`)
log.compile(` + ${pad_right(text(pc), 6)}${pad_right(o_instr, 45)} (eliminated)`)
}
mi = mi + 1
oi = oi + 1
pc = pc + 1
continue
}
if (is_array(m_instr) && is_array(o_instr)) {
kind = classify(m_instr, o_instr)
if (kind != "identical") {
n = length(m_instr)
instr_line = m_instr[n - 2]
if (instr_line != last_line && instr_line != null) {
src = get_source_line(instr_line)
if (src != null) src = trim(src)
if (last_line != null) log.compile("")
if (src != null && length(src) > 0) {
log.compile(` --- line ${text(instr_line)}: ${src} ---`)
}
last_line = instr_line
}
annotation = ""
if (kind == "rewritten") {
if (o_instr[0] == "concat" && m_instr[0] != "concat") {
annotation = "(specialized)"
} else {
annotation = "(rewritten)"
}
}
log.compile(` - ${pad_right(text(pc), 6)}${fmt_instr(m_instr)}`)
log.compile(` + ${pad_right(text(pc), 6)}${fmt_instr(o_instr)} ${annotation}`)
}
pc = pc + 1
}
mi = mi + 1
oi = oi + 1
}
return null
}
// Process functions
main_name = mcode_ir.name != null ? mcode_ir.name : "<main>"
if (mcode_ir.main != null && opt_ir.main != null) {
if (fn_matches(-1, main_name)) {
diff_function(mcode_ir.main, opt_ir.main, main_name, -1)
}
}
if (mcode_ir.functions != null && opt_ir.functions != null) {
fi = 0
while (fi < length(mcode_ir.functions) && fi < length(opt_ir.functions)) {
func = mcode_ir.functions[fi]
opt_func = opt_ir.functions[fi]
fname = func.name != null ? func.name : "<anonymous>"
if (fn_matches(fi, fname)) {
diff_function(func, opt_func, `[${text(fi)}] ${fname}`, fi)
}
fi = fi + 1
}
}
if (show_summary) {
log.compile(`\n total: ${text(total_eliminated)} eliminated, ${text(total_rewritten)} rewritten across ${text(total_funcs)} functions`)
}
return null
}
run()
$stop()

265
disasm.ce Normal file

@@ -0,0 +1,265 @@
// disasm.ce — source-interleaved disassembly
//
// Usage:
// cell disasm <file> Disassemble all functions (mcode)
// cell disasm --optimized <file> Disassemble optimized IR (streamline)
// cell disasm --fn <N|name> <file> Show only function N or named function
// cell disasm --line <N> <file> Show instructions from source line N
var fd = use("fd")
var shop = use("internal/shop")
var pad_right = function(s, w) {
var r = s
while (length(r) < w) {
r = r + " "
}
return r
}
var fmt_val = function(v) {
if (is_null(v)) return "null"
if (is_number(v)) return text(v)
if (is_text(v)) return `"${v}"`
if (is_object(v)) return text(v)
if (is_logical(v)) return v ? "true" : "false"
return text(v)
}
var run = function() {
var use_optimized = false
var fn_filter = null
var line_filter = null
var filename = null
var i = 0
var compiled = null
var source_text = null
var source_lines = null
var main_name = null
var fi = 0
var func = null
var fname = null
while (i < length(args)) {
if (args[i] == '--optimized') {
use_optimized = true
} else if (args[i] == '--fn') {
i = i + 1
fn_filter = args[i]
} else if (args[i] == '--line') {
i = i + 1
line_filter = number(args[i])
} else if (args[i] == '--help' || args[i] == '-h') {
log.console("Usage: cell disasm [--optimized] [--fn <N|name>] [--line <N>] <file>")
log.console("")
log.console(" --optimized Use optimized IR (streamline) instead of raw mcode")
log.console(" --fn <N|name> Filter to function by index or name")
log.console(" --line <N> Show only instructions from source line N")
return null
} else if (!starts_with(args[i], '-')) {
filename = args[i]
}
i = i + 1
}
if (!filename) {
log.console("Usage: cell disasm [--optimized] [--fn <N|name>] [--line <N>] <file>")
return null
}
// Compile
if (use_optimized) {
compiled = shop.compile_file(filename)
} else {
compiled = shop.mcode_file(filename)
}
// Read source file
source_text = text(fd.slurp(filename))
source_lines = array(source_text, "\n")
// Helpers
var get_source_line = function(line_num) {
if (line_num < 1 || line_num > length(source_lines)) return null
return source_lines[line_num - 1]
}
var first_instr_line = function(func) {
var instrs = func.instructions
var i = 0
var n = 0
if (instrs == null) return null
while (i < length(instrs)) {
if (is_array(instrs[i])) {
n = length(instrs[i])
return instrs[i][n - 2]
}
i = i + 1
}
return null
}
var func_has_line = function(func, target) {
var instrs = func.instructions
var i = 0
var n = 0
if (instrs == null) return false
while (i < length(instrs)) {
if (is_array(instrs[i])) {
n = length(instrs[i])
if (instrs[i][n - 2] == target) return true
}
i = i + 1
}
return false
}
var fn_matches = function(index, name) {
var match = null
if (fn_filter == null) return true
if (index >= 0 && fn_filter == text(index)) return true
if (name != null) {
match = search(name, fn_filter)
if (match != null && match >= 0) return true
}
return false
}
var func_name_by_index = function(fi) {
var f = null
if (compiled.functions == null) return null
if (fi < 0 || fi >= length(compiled.functions)) return null
f = compiled.functions[fi]
return f.name
}
var dump_function = function(func, name, index) {
var nr_args = func.nr_args != null ? func.nr_args : 0
var nr_slots = func.nr_slots != null ? func.nr_slots : 0
var nr_close = func.nr_close_slots != null ? func.nr_close_slots : 0
var instrs = func.instructions
var start_line = first_instr_line(func)
var header = null
var i = 0
var pc = 0
var instr = null
var op = null
var n = 0
var parts = null
var j = 0
var operands = null
var instr_line = null
var last_line = null
var src = null
var line_str = null
var instr_text = null
var target_name = null
header = `\n=== ${name} (args=${text(nr_args)}, slots=${text(nr_slots)}, closures=${text(nr_close)})`
if (start_line != null) {
header = header + ` [line ${text(start_line)}]`
}
header = header + " ==="
log.compile(header)
if (instrs == null || length(instrs) == 0) {
log.compile(" (empty)")
return null
}
while (i < length(instrs)) {
instr = instrs[i]
if (is_text(instr)) {
if (!starts_with(instr, "_nop_") && line_filter == null) {
log.compile(` ${instr}:`)
}
} else if (is_array(instr)) {
op = instr[0]
n = length(instr)
instr_line = instr[n - 2]
if (line_filter != null && instr_line != line_filter) {
pc = pc + 1
i = i + 1
continue
}
if (instr_line != last_line && instr_line != null) {
src = get_source_line(instr_line)
if (src != null) {
src = trim(src)
}
if (last_line != null) {
log.compile("")
}
if (src != null && length(src) > 0) {
log.compile(` --- line ${text(instr_line)}: ${src} ---`)
} else {
log.compile(` --- line ${text(instr_line)} ---`)
}
last_line = instr_line
}
parts = []
j = 1
while (j < n - 2) {
push(parts, fmt_val(instr[j]))
j = j + 1
}
operands = text(parts, ", ")
line_str = instr_line != null ? `:${text(instr_line)}` : ""
instr_text = ` ${pad_right(text(pc), 6)}${pad_right(op, 15)}${operands}`
// Cross-reference for function creation instructions
target_name = null
if (op == "function" && n >= 5) {
target_name = func_name_by_index(instr[2])
}
if (target_name != null) {
instr_text = pad_right(instr_text, 65) + line_str + ` ; -> [${text(instr[2])}] ${target_name}`
} else {
instr_text = pad_right(instr_text, 65) + line_str
}
log.compile(instr_text)
pc = pc + 1
}
i = i + 1
}
return null
}
// Process functions
main_name = compiled.name != null ? compiled.name : "<main>"
if (compiled.main != null) {
if (fn_matches(-1, main_name)) {
if (line_filter == null || func_has_line(compiled.main, line_filter)) {
dump_function(compiled.main, main_name, -1)
}
}
}
if (compiled.functions != null) {
fi = 0
while (fi < length(compiled.functions)) {
func = compiled.functions[fi]
fname = func.name != null ? func.name : "<anonymous>"
if (fn_matches(fi, fname)) {
if (line_filter == null || func_has_line(func, line_filter)) {
dump_function(func, `[${text(fi)}] ${fname}`, fi)
}
}
fi = fi + 1
}
}
return null
}
run()
$stop()


@@ -1,9 +0,0 @@
nav:
- index.md
- cellscript.md
- actors.md
- packages.md
- cli.md
- c-modules.md
- Standard Library: library

docs/_index.md Normal file

@@ -0,0 +1,94 @@
---
title: "Documentation"
description: "ƿit language documentation"
type: "docs"
---
![image](/images/wizard.png)
ƿit is an actor-based scripting language for building concurrent applications. It combines a familiar C-like syntax with the actor model of computation, optimized for low memory usage and simplicity.
## Key Features
- **Actor Model** — isolated memory, message passing, no shared state
- **Immutability** — `stone()` makes values permanently frozen
- **Prototype Inheritance** — objects without classes
- **C Integration** — seamlessly extend with native code
- **Cross-Platform** — deploy to desktop, web, and embedded
## Quick Start
```javascript
// hello.ce - A simple actor
print("Hello, ƿit!")
$stop()
```
```bash
pit hello
```
## Language
- [**ƿit Language**](/docs/language/) — syntax, types, and operators
- [**Actors and Modules**](/docs/actors/) — the execution model
- [**Requestors**](/docs/requestors/) — asynchronous composition
- [**Packages**](/docs/packages/) — code organization and sharing
- [**Shop Architecture**](/docs/shop/) — module resolution, compilation, and caching
## Reference
- [**Built-in Functions**](/docs/functions/) — intrinsics reference
- [text](/docs/library/text/) — text conversion and manipulation
- [number](/docs/library/number/) — numeric conversion and operations
- [array](/docs/library/array/) — array creation and manipulation
- [object](/docs/library/object/) — object creation, prototypes, and serialization
## Standard Library
Modules loaded with `use()`:
- [blob](/docs/library/blob/) — binary data
- [time](/docs/library/time/) — time and dates
- [math](/docs/library/math/) — trigonometry and math
- [json](/docs/library/json/) — JSON encoding/decoding
- [random](/docs/library/random/) — random numbers
## Tools
- [**Command Line**](/docs/cli/) — the `pit` tool
- [**Semantic Index**](/docs/semantic-index/) — index and query symbols, references, and call sites
- [**Testing**](/docs/testing/) — writing and running tests
- [**Compiler Inspection**](/docs/compiler-tools/) — dump AST, mcode, and optimizer reports
- [**Writing C Modules**](/docs/c-modules/) — native extensions
## Architecture
ƿit programs are organized into **packages**. Each package contains:
- **Modules** (`.cm`) — return a value, cached and frozen
- **Actors** (`.ce`) — run independently, communicate via messages
- **C files** (`.c`) — compiled to native libraries
Actors never share memory. They communicate by sending messages, which are automatically serialized. This makes concurrent programming safe and predictable.
## Installation
```bash
# Clone and bootstrap
git clone https://gitea.pockle.world/john/cell
cd cell
make bootstrap
```
The ƿit shop is stored at `~/.cell/`.
## Development
After making changes, recompile with:
```bash
make
```
Run `cell --help` to see all available CLI flags.


@@ -1,10 +1,15 @@
# Actors and Modules
---
title: "Actors and Modules"
description: "The ƿit execution model"
weight: 20
type: "docs"
---
Cell organizes code into two types of scripts: **modules** (`.cm`) and **actors** (`.ce`).
ƿit organizes code into two types of scripts: **modules** (`.cm`) and **actors** (`.ce`).
## The Actor Model
Cell is built on the actor model of computation. Each actor:
ƿit is built on the actor model of computation. Each actor:
- Has its own **isolated memory** — actors never share state
- Runs to completion each **turn** — no preemption
@@ -21,13 +26,13 @@ A module is a script that **returns a value**. The returned value is cached and
// math_utils.cm
var math = use('math/radians')
function distance(x1, y1, x2, y2) {
var distance = function(x1, y1, x2, y2) {
var dx = x2 - x1
var dy = y2 - y1
return math.sqrt(dx * dx + dy * dy)
}
function midpoint(x1, y1, x2, y2) {
var midpoint = function(x1, y1, x2, y2) {
return {
x: (x1 + x2) / 2,
y: (y1 + y2) / 2
@@ -60,12 +65,12 @@ An actor is a script that **does not return a value**. It runs as an independent
```javascript
// worker.ce
log.console("Worker started")
print("Worker started")
$on_message = function(msg) {
log.console("Received:", msg)
// Process message...
}
$receiver(function(msg) {
print("Received:", msg)
send(msg, {status: "ok"})
})
```
**Key properties:**
@@ -78,110 +83,230 @@ $on_message = function(msg) {
Actors have access to special functions prefixed with `$`:
### $me
### $self
Reference to the current actor.
Reference to the current actor. This is a stone (immutable) actor object.
```javascript
log.console($me) // actor reference
print($self) // actor reference
print(is_actor($self)) // true
```
### $overling
Reference to the parent actor that started this actor. `null` for the root actor. Child actors are automatically coupled to their overling — if the parent dies, the child dies too.
```javascript
if ($overling != null) {
send($overling, {status: "ready"})
}
```
### $stop()
Stop the current actor.
Stop the current actor. When called with an actor argument, stops that underling (child) instead.
```javascript
$stop()
$stop() // stop self
$stop(child) // stop a child actor
```
### $send(actor, message, callback)
Send a message to another actor.
**Important:** `$stop()` does not halt execution immediately. Code after the call continues running in the current turn — it only prevents the actor from receiving future messages. Structure your code so that nothing runs after `$stop()`, or use `return` to exit the current function first.
```javascript
$send(other_actor, {type: "ping", data: 42}, function(reply) {
log.console("Got reply:", reply)
})
```
// Wrong — code after $stop() still runs
if (done) $stop()
do_more_work() // this still executes!
Messages are automatically **splatted** — flattened to plain data without prototypes.
// Right — return after $stop()
if (done) { $stop(); return }
do_more_work()
```
### $start(callback, program)
Start a new actor from a script.
Start a new child actor from a script. The callback receives lifecycle events:
- `{type: "greet", actor: <ref>}` — child started successfully
- `{type: "stop"}` — child stopped cleanly
- `{type: "disrupt", reason: ...}` — child crashed
```javascript
$start(function(new_actor) {
log.console("Started:", new_actor)
$start(function(event) {
if (event.type == 'greet') {
print("Child started:", event.actor)
send(event.actor, {task: "work"})
}
if (event.type == 'stop') {
print("Child stopped")
}
if (event.type == 'disrupt') {
print("Child crashed:", event.reason)
}
}, "worker")
```
### $delay(callback, seconds)
Schedule a callback after a delay.
Schedule a callback after a delay. Returns a cancel function that can be called to prevent the callback from firing.
```javascript
$delay(function() {
log.console("5 seconds later")
var cancel = $delay(function() {
print("5 seconds later")
}, 5)
// To cancel before it fires:
cancel()
```
### $clock(callback)
Get called every frame/tick.
Get called every frame/tick. The callback receives the current time as a number.
```javascript
$clock(function(dt) {
// Called each tick with delta time
$clock(function(t) {
// called each tick with current time
})
```
### $receiver(callback)
Set up a message receiver.
Set up a message receiver. The callback is called with the incoming message whenever another actor sends a message to this actor.
To reply to a message, call `send(message, reply_data)` — the message object contains routing information that directs the reply back to the sender.
```javascript
$receiver(function(message, reply) {
// Handle incoming message
reply({status: "ok"})
$receiver(function(message) {
// handle incoming message
send(message, {status: "ok"})
})
```
### $portal(callback, port)
Open a network port.
Open a network port to receive connections from remote actors.
```javascript
$portal(function(connection) {
// Handle new connection
// handle new connection
}, 8080)
```
### $contact(callback, record)
Connect to a remote address.
Connect to a remote actor at a given address.
```javascript
$contact(function(connection) {
// Connected
// connected
}, {host: "example.com", port: 80})
```
### $time_limit(requestor, seconds)
Wrap a requestor with a timeout.
Wrap a requestor with a timeout. Returns a new requestor that will cancel the original and call its callback with a failure if the time limit is exceeded. See [Requestors](/docs/requestors/) for details.
```javascript
$time_limit(my_requestor, 10) // 10 second timeout
var timed = $time_limit(my_requestor, 10)
timed(function(result, reason) {
// reason will explain timeout if it fires
}, initial_value)
```
### $couple(actor)
Couple the current actor to another actor. When the coupled actor dies, the current actor also dies. Coupling is automatic between a child actor and its overling (parent).
```javascript
$couple(other_actor)
```
### $unneeded(callback, seconds)
Schedule the actor for removal after a specified time. The callback fires when the time elapses.
```javascript
$unneeded(function() {
// cleanup before removal
}, 30)
```
### $connection(callback, actor, config)
Get information about the connection to another actor. For local actors, returns `{type: "local"}`. For remote actors, returns connection details including latency, bandwidth, and activity.
```javascript
$connection(function(info) {
if (info.type == "local") {
print("same machine")
} else {
print(info.latency)
}
}, other_actor, {})
```
## Runtime Functions
These functions are available in actors without the `$` prefix:
### send(actor, message, callback)
Send a message to another actor. The message must be an object record.
The optional callback receives the reply when the recipient responds.
```javascript
send(other_actor, {type: "ping"}, function(reply) {
print("Got reply:", reply)
})
```
To reply to a received message, pass the message itself as the first argument — it contains routing information:
```javascript
$receiver(function(message) {
send(message, {result: 42})
})
```
Messages are automatically flattened to plain data.
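For example, a record that carries a prototype arrives as a plain record on the other side (`worker` and `base` are placeholder names here):

```javascript
var base = {kind: "point"}
var msg = {__proto__: base, x: 1, y: 2}
send(worker, msg)
// the receiver sees a plain data record — no prototype chain travels with it
```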
### is_actor(value)
Returns `true` if the value is an actor reference.
```javascript
if (is_actor(some_value)) {
send(some_value, {ping: true})
}
```
### log
Channel-based logging. Any `log.X(value)` writes to channel `"X"`. Three channels are conventional: `log.console(msg)`, `log.error(msg)`, `log.system(msg)` — but any name works.
Channels are routed to configurable **sinks** (console or file) defined in `.cell/log.toml`. See [Logging](/docs/logging/) for the full guide.
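As a sketch, a made-up `metrics` channel works the same way as the conventional ones:

```javascript
log.console("starting up")      // conventional channel
log.metrics({requests: 42})     // custom channel named "metrics"
// where each channel ends up (console or file) is decided by .cell/log.toml
```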
### use(path)
Import a module. See [Module Resolution](#module-resolution) below.
### args
Array of command-line arguments passed to the actor.
### sequence(), parallel(), race(), fallback()
Requestor composition functions. See [Requestors](/docs/requestors/) for details.
## Module Resolution
When you call `use('name')`, Cell searches:
When you call `use('name')`, ƿit searches:
1. **Current package** — files relative to package root
2. **Dependencies** — packages declared in `cell.toml`
3. **Core** — built-in Cell modules
3. **Core** — built-in ƿit modules
```javascript
// From within package 'myapp':
@@ -191,7 +316,7 @@ use('json') // core json module
use('otherlib/foo') // dependency 'otherlib', file foo.cm
```
Files starting with underscore (`_helper.cm`) are private to the package.
Files in the `internal/` directory are private to the package.
## Example: Simple Actor System
@@ -199,25 +324,32 @@ Files starting with underscore (`_helper.cm`) are private to the package.
// main.ce - Entry point
var config = use('config')
log.console("Starting application...")
print("Starting application...")
$start(function(worker) {
$send(worker, {task: "process", data: [1, 2, 3]})
$start(function(event) {
if (event.type == 'greet') {
send(event.actor, {task: "process", data: [1, 2, 3]})
}
if (event.type == 'stop') {
print("Worker finished")
$stop()
}
}, "worker")
$delay(function() {
log.console("Shutting down")
print("Shutting down")
$stop()
}, 10)
```
```javascript
// worker.ce - Worker actor
$receiver(function(msg, reply) {
$receiver(function(msg) {
if (msg.task == "process") {
var result = array(msg.data, x => x * 2)
reply({result: result})
var result = array(msg.data, function(x) { return x * 2 })
send(msg, {result: result})
}
$stop()
})
```


@@ -1,6 +1,11 @@
# Writing C Modules
---
title: "Writing C Modules"
description: "Extending ƿit with native code"
weight: 50
type: "docs"
---
Cell makes it easy to extend functionality with C code. C files in a package are compiled into a dynamic library and can be imported like any other module.
ƿit makes it easy to extend functionality with C code. C files in a package are compiled into a dynamic library and can be imported like any other module.
## Basic Structure
@@ -45,12 +50,20 @@ Where:
- `<filename>` is the C file name without extension
Examples:
- `mypackage/math.c` → `js_mypackage_math_use`
- `gitea.pockle.world/john/lib/render.c` → `js_gitea_pockle_world_john_lib_render_use`
- `mypackage/math.c` -> `js_mypackage_math_use`
- `gitea.pockle.world/john/lib/render.c` -> `js_gitea_pockle_world_john_lib_render_use`
- `mypackage/internal/helpers.c` -> `js_mypackage_internal_helpers_use`
- `mypackage/game.ce` (AOT actor) -> `js_mypackage_game_program`
Actor files (`.ce`) use the `_program` suffix instead of `_use`.
Internal modules (in `internal/` subdirectories) follow the same convention — the `internal` directory name becomes part of the symbol. For example, `internal/os.c` in the core package has the symbol `js_core_internal_os_use`.
**Note:** Having both a `.cm` and `.c` file with the same stem at the same scope is a build error.
## Required Headers
Include `cell.h` for all Cell integration:
Include `cell.h` for all ƿit integration:
```c
#include "cell.h"
@@ -63,7 +76,7 @@ This provides:
## Conversion Functions
### JavaScript ↔ C
### JavaScript <-> C
```c
// Numbers
@@ -177,10 +190,12 @@ JSC_CCALL(vector_normalize,
double y = js2number(js, argv[1]);
double len = sqrt(x*x + y*y);
if (len > 0) {
JSValue result = JS_NewObject(js);
JS_SetPropertyStr(js, result, "x", number2js(js, x/len));
JS_SetPropertyStr(js, result, "y", number2js(js, y/len));
ret = result;
JS_FRAME(js);
JS_ROOT(result, JS_NewObject(js));
JS_SetPropertyStr(js, result.val, "x", number2js(js, x/len));
JS_SetPropertyStr(js, result.val, "y", number2js(js, y/len));
JS_RestoreFrame(_js_ctx, _js_gc_frame, _js_local_frame);
ret = result.val;
}
)
@@ -201,7 +216,7 @@ static const JSCFunctionListEntry js_funcs[] = {
CELL_USE_FUNCS(js_funcs)
```
Usage in Cell:
Usage in ƿit:
```javascript
var vector = use('vector')
@@ -211,44 +226,116 @@ var n = vector.normalize(3, 4) // {x: 0.6, y: 0.8}
var d = vector.dot(1, 0, 0, 1) // 0
```
## Combining C and Cell
A common pattern is to have a C file provide low-level functions and a `.cm` file provide a higher-level API:
```c
// _vector_native.c
// ... raw C functions ...
```
```javascript
// vector.cm
var native = this // C module passed as 'this'
function Vector(x, y) {
return {x: x, y: y}
}
Vector.length = function(v) {
return native.length(v.x, v.y)
}
Vector.normalize = function(v) {
return native.normalize(v.x, v.y)
}
return Vector
```
## Build Process
C files are automatically compiled when you run:
```bash
cell build
cell update
cell --dev build
```
The resulting dynamic library is placed in `~/.cell/lib/`.
Each C file is compiled into a per-file dynamic library at a content-addressed path in `~/.cell/build/<hash>`. A manifest is written for each package so the runtime can find dylibs without rerunning the build pipeline — see [Dylib Manifests](/docs/shop/#dylib-manifests).
## Compilation Flags (cell.toml)
Use the `[compilation]` section in `cell.toml` to pass compiler and linker flags:
```toml
[compilation]
CFLAGS = "-Isrc -Ivendor/include"
LDFLAGS = "-lz -lm"
```
### Include paths
Relative `-I` paths are resolved from the package root:
```toml
CFLAGS = "-Isdk/public"
```
If your package is at `/path/to/mypkg`, this becomes `-I/path/to/mypkg/sdk/public`.
Absolute paths are passed through unchanged.
The build system also auto-discovers `include/` directories — if your package has an `include/` directory, it is automatically added to the include path. No need to add `-I$PACKAGE/include` in cell.toml.
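For example, with this hypothetical layout the header is found without any flags in `cell.toml`:

```
mypkg/
├── cell.toml
├── wrapper.c        # can #include "mylib.h" directly
└── include/
    └── mylib.h
```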
### Library paths
Relative `-L` paths work the same way:
```toml
LDFLAGS = "-Lsdk/lib -lmylib"
```
### Target-specific flags
Add sections named `[compilation.<target>]` for platform-specific flags:
```toml
[compilation]
CFLAGS = "-Isdk/public"
[compilation.macos_arm64]
LDFLAGS = "-Lsdk/lib/osx -lmylib"
[compilation.linux]
LDFLAGS = "-Lsdk/lib/linux64 -lmylib"
[compilation.windows]
LDFLAGS = "-Lsdk/lib/win64 -lmylib64"
```
Available targets: `macos_arm64`, `macos_x86_64`, `linux`, `linux_arm64`, `windows`.
### Sigils
Use sigils in flags to refer to standard directories:
- `$LOCAL` — absolute path to `.cell/local` (for prebuilt libraries)
- `$PACKAGE` — absolute path to the package root
```toml
CFLAGS = "-I$PACKAGE/vendor/include"
LDFLAGS = "-L$LOCAL -lmyprebuilt"
```
### Example: vendored SDK
A package wrapping an external SDK with platform-specific shared libraries:
```
mypkg/
├── cell.toml
├── wrapper.cpp
└── sdk/
├── public/
│ └── mylib/
│ └── api.h
└── lib/
├── osx/
│ └── libmylib.dylib
└── linux64/
└── libmylib.so
```
```toml
[compilation]
CFLAGS = "-Isdk/public"
[compilation.macos_arm64]
LDFLAGS = "-Lsdk/lib/osx -lmylib"
[compilation.linux]
LDFLAGS = "-Lsdk/lib/linux64 -lmylib"
```
```cpp
// wrapper.cpp
#include "cell.h"
#include <mylib/api.h>
// ...
```
## Platform-Specific Code
@@ -260,7 +347,152 @@ audio_playdate.c # Playdate
audio_emscripten.c # Web/Emscripten
```
Cell selects the appropriate file based on the target platform.
ƿit selects the appropriate file based on the target platform.
## Multi-File C Modules
If your module wraps a C library, place the library's source files in a `src/` directory. Files in `src/` are compiled as support objects and linked into your module's dylib — they are not treated as standalone modules.
```
mypackage/
rtree.c # module (exports js_mypackage_rtree_use)
src/
rtree.c # support file (linked into rtree.dylib)
rtree.h # header
```
The module file (`rtree.c`) includes the library header and uses `cell.h` as usual. The support files are plain C — they don't need any cell macros.
## GC Safety
ƿit uses a **Cheney copying garbage collector**. Any JS allocation — `JS_NewObject`, `JS_NewString`, `JS_NewInt32`, `JS_SetPropertyStr`, `js_new_blob_stoned_copy`, etc. — can trigger GC, which **moves** heap objects to new addresses. Bare C locals holding `JSValue` become **dangling pointers** after any allocating call. This is not a theoretical concern — it causes real crashes that are difficult to reproduce because they depend on heap pressure.
### Checklist (apply to EVERY C function you write or modify)
1. Count the `JS_New*`, `JS_SetProperty*`, and `js_new_blob*` calls in the function
2. If there are **2 or more**, the function **MUST** use `JS_FRAME` / `JS_ROOT` / `JS_RETURN`
3. Every `JSValue` held in a C local across an allocating call must be rooted
### When you need rooting
If a function creates **one** heap object and returns it immediately, no rooting is needed:
```c
JSC_CCALL(mymod_name,
ret = JS_NewString(js, "hello");
)
```
If a function creates an object and then sets properties on it, you **must** root it — each `JS_SetPropertyStr` call is an allocating call that can trigger GC:
```c
// UNSAFE — will crash under GC pressure:
JSValue obj = JS_NewObject(js);
JS_SetPropertyStr(js, obj, "x", JS_NewInt32(js, 1)); // can GC → obj is stale
JS_SetPropertyStr(js, obj, "y", JS_NewInt32(js, 2)); // obj may be garbage
return obj;
// SAFE:
JS_FRAME(js);
JS_ROOT(obj, JS_NewObject(js));
JS_SetPropertyStr(js, obj.val, "x", JS_NewInt32(js, 1));
JS_SetPropertyStr(js, obj.val, "y", JS_NewInt32(js, 2));
JS_RETURN(obj.val);
```
### Patterns
**Object with properties** — the most common pattern in this codebase:
```c
JS_FRAME(js);
JS_ROOT(result, JS_NewObject(js));
JS_SetPropertyStr(js, result.val, "width", JS_NewInt32(js, w));
JS_SetPropertyStr(js, result.val, "height", JS_NewInt32(js, h));
JS_SetPropertyStr(js, result.val, "pixels", js_new_blob_stoned_copy(js, data, len));
JS_RETURN(result.val);
```
**Array with loop** — root the element variable *before* the loop, then reassign `.val` each iteration:
```c
JS_FRAME(js);
JS_ROOT(arr, JS_NewArray(js));
JS_ROOT(item, JS_NULL);
for (int i = 0; i < count; i++) {
item.val = JS_NewObject(js);
JS_SetPropertyStr(js, item.val, "index", JS_NewInt32(js, i));
JS_SetPropertyStr(js, item.val, "data", js_new_blob_stoned_copy(js, ptr, sz));
JS_SetPropertyNumber(js, arr.val, i, item.val);
}
JS_RETURN(arr.val);
```
**WARNING — NEVER put `JS_ROOT` inside a loop.** `JS_ROOT` declares a `JSGCRef` local and calls `JS_PushGCRef(&name)`, which pushes its address onto a linked list. Inside a loop the compiler reuses the same stack address, so on iteration 2+ the list becomes self-referential (`ref->prev == ref`). When GC triggers it walks the chain and **hangs forever**. This bug is intermittent — it only manifests when GC happens to run during the loop — making it very hard to reproduce.
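A minimal sketch of the broken shape, for contrast with the safe loop pattern above:

```c
// WRONG — JS_ROOT inside the loop body:
JS_FRAME(js);
JS_ROOT(arr, JS_NewArray(js));
for (int i = 0; i < count; i++) {
    JS_ROOT(item, JS_NewObject(js));   // same stack address pushed every iteration
    JS_SetPropertyNumber(js, arr.val, i, item.val);
}   // on iteration 2+ the GC ref list points at itself — GC hangs
JS_RETURN(arr.val);
```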
**Nested objects** — root every object that persists across an allocating call:
```c
JS_FRAME(js);
JS_ROOT(outer, JS_NewObject(js));
JS_ROOT(inner, JS_NewArray(js));
// ... populate inner ...
JS_SetPropertyStr(js, outer.val, "items", inner.val);
JS_RETURN(outer.val);
```
**Inside `JSC_CCALL`** — use `JS_RestoreFrame` and assign to `ret`:
```c
JSC_CCALL(mymod_make,
JS_FRAME(js);
JS_ROOT(obj, JS_NewObject(js));
JS_SetPropertyStr(js, obj.val, "x", number2js(js, 42));
JS_RestoreFrame(_js_ctx, _js_gc_frame, _js_local_frame);
ret = obj.val;
)
```
### Macros
| Macro | Purpose |
|-------|---------|
| `JS_FRAME(js)` | Save the GC frame. Required before any `JS_ROOT`. |
| `JS_ROOT(name, init)` | Declare a `JSGCRef` and root its value. Access via `name.val`. |
| `JS_LOCAL(name, init)` | Declare a rooted `JSValue` (GC updates it through its address). |
| `JS_RETURN(val)` | Restore the frame and return a value. |
| `JS_RETURN_NULL()` | Restore the frame and return `JS_NULL`. |
| `JS_RETURN_EX()` | Restore the frame and return `JS_EXCEPTION`. |
| `JS_RestoreFrame(...)` | Manual frame restore (for `JSC_CCALL` bodies that use `ret =`). |
### Error return rules
- Error returns **before** `JS_FRAME` can use plain `return JS_ThrowTypeError(...)` etc.
- Error returns **after** `JS_FRAME` must use `JS_RETURN_EX()` or `JS_RETURN_NULL()` — never plain `return`, which would leak the GC frame.
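A sketch combining both rules (the function name and the checks are hypothetical):

```c
static JSValue make_size(JSContext *js, int w, int h) {
    if (w < 0 || h < 0)
        return JS_ThrowTypeError(js, "negative size");  // before JS_FRAME: plain return is fine

    JS_FRAME(js);
    JS_ROOT(obj, JS_NewObject(js));
    JS_SetPropertyStr(js, obj.val, "w", JS_NewInt32(js, w));
    if (w > 10000 || h > 10000)
        JS_RETURN_NULL();               // after JS_FRAME: restore the frame, never plain return
    JS_SetPropertyStr(js, obj.val, "h", JS_NewInt32(js, h));
    JS_RETURN(obj.val);
}
```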
### Migrating from gc_mark
The old mark-and-sweep GC had a `gc_mark` callback in `JSClassDef` for C structs that held JSValue fields. This no longer exists. The copying GC needs to know the **address** of every pointer to update it when objects move.
If your C struct holds a JSValue that must survive across GC points, root it for the duration it's alive:
```c
typedef struct {
JSValue callback;
JSLocalRef callback_lr;
} MyWidget;
// When storing:
widget->callback = value;
widget->callback_lr.ptr = &widget->callback;
JS_PushLocalRef(js, &widget->callback_lr);
// When done (before freeing the struct):
// The local ref is cleaned up when the frame is restored,
// or manage it manually.
```
In practice, most C wrappers hold only opaque C pointers (like `SDL_Window*`) and never store JSValues in the struct — these need no migration.
## Static Declarations
@@ -275,3 +507,32 @@ static int module_state = 0;
```
This prevents symbol conflicts between packages.
## Troubleshooting
### Missing header / SDK not installed
If a package wraps a third-party SDK that isn't installed on your system, the build will show:
```
module.c: fatal error: 'sdk/header.h' file not found (SDK not installed?)
```
Install the required SDK or skip that package. These warnings are harmless — other packages continue building normally.
### CFLAGS not applied
If your `cell.toml` has a `[compilation]` section but flags aren't being picked up, check:
1. The TOML syntax is valid (strings must be quoted)
2. The section header is exactly `[compilation]` (not `[compile]` etc.)
3. Target-specific sections use valid target names: `macos_arm64`, `macos_x86_64`, `linux`, `linux_arm64`, `windows`
### API changes from older versions
If C modules fail with errors about function signatures:
- `JS_IsArray` takes one argument (the value), not two — remove the context argument
- Use `JS_GetPropertyNumber` / `JS_SetPropertyNumber` instead of `JS_GetPropertyUint32` / `JS_SetPropertyUint32`
- Use `JS_NewString` instead of `JS_NewAtomString`
- There is no `undefined` — use `JS_IsNull` and `JS_NULL` only
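A hypothetical before/after for these renames:

```c
// before (older API):
if (JS_IsArray(js, v)) first = JS_GetPropertyUint32(js, v, 0);

// after:
if (JS_IsArray(v)) first = JS_GetPropertyNumber(js, v, 0);
```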


@@ -1,288 +0,0 @@
# Cell Language
Cell is a scripting language for actor-based programming. It combines a familiar syntax with a prototype-based object system and strict immutability semantics.
## Basics
### Variables and Constants
```javascript
var x = 10 // mutable variable (block-scoped like let)
def PI = 3.14159 // constant (cannot be reassigned)
```
### Data Types
Cell has eight fundamental types:
- **number** — DEC64 decimal floating point (no rounding errors)
- **text** — Unicode strings
- **logical** — `true` or `false`
- **null** — the absence of a value (no `undefined`)
- **array** — ordered, numerically-indexed sequences
- **object** — key-value records with prototype inheritance
- **blob** — binary data (bits, not bytes)
- **function** — first-class callable values
### Literals
```javascript
// Numbers
42
3.14
1_000_000 // underscores for readability
// Text
"hello"
'world'
`template ${x}` // string interpolation
// Logical
true
false
// Null
null
// Arrays
[1, 2, 3]
["a", "b", "c"]
// Objects
{name: "cell", version: 1}
{x: 10, y: 20}
```
### Operators
```javascript
// Arithmetic
+ - * / %
** // exponentiation
// Comparison (always strict)
== // equals (like === in JS)
!= // not equals (like !== in JS)
< > <= >=
// Logical
&& || !
// Assignment
= += -= *= /=
```
### Control Flow
```javascript
// Conditionals
if (x > 0) {
log.console("positive")
} else if (x < 0) {
log.console("negative")
} else {
log.console("zero")
}
// Ternary
var sign = x > 0 ? 1 : -1
// Loops
for (var i = 0; i < 10; i++) {
log.console(i)
}
for (var item of items) {
log.console(item)
}
for (var key in obj) {
log.console(key, obj[key])
}
while (condition) {
// body
}
// Control
break
continue
return value
throw "error message"
```
### Functions
```javascript
// Named function
function add(a, b) {
return a + b
}
// Anonymous function
var multiply = function(a, b) {
return a * b
}
// Arrow function
var square = x => x * x
var sum = (a, b) => a + b
// Rest parameters
function log_all(...args) {
for (var arg of args) log.console(arg)
}
// Default parameters
function greet(name, greeting = "Hello") {
return `${greeting}, ${name}!`
}
```
All closures capture `this` (like arrow functions in JavaScript).
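A sketch of lexical `this` — the inner function sees the same `this` as the method that created it:

```javascript
var counter = {
    n: 0,
    bump: function() {
        var inc = function() { this.n = this.n + 1 }  // 'this' is the counter, not rebound
        inc()
        return this.n
    }
}
```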
## Arrays
Arrays are **distinct from objects**. They are ordered, numerically-indexed sequences. You cannot add arbitrary string keys to an array.
```javascript
var arr = [1, 2, 3]
arr[0] // 1
arr[2] = 10 // [1, 2, 10]
length(arr) // 3
// Array spread
var more = [...arr, 4, 5] // [1, 2, 10, 4, 5]
```
## Objects
Objects are key-value records with prototype-based inheritance.
```javascript
var point = {x: 10, y: 20}
point.x // 10
point["y"] // 20
// Object spread
var point3d = {...point, z: 30}
// Prototype inheritance
var colored_point = {__proto__: point, color: "red"}
colored_point.x // 10 (inherited)
```
### Prototypes
```javascript
// Create object with prototype
var child = meme(parent)
// Get prototype
var p = proto(child)
// Check prototype chain
isa(child, parent) // true
```
## Immutability with Stone
The `stone()` function makes values permanently immutable.
```javascript
var config = stone({
debug: true,
maxRetries: 3
})
config.debug = false // Error! Stone objects cannot be modified
```
Stone is **deep** — all nested objects and arrays are also frozen. This cannot be reversed.
```javascript
stone.p(value) // returns true if value is stone
```
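Because freezing is deep, nested records are stone too — a sketch:

```javascript
var cfg = stone({limits: {max: 3}})
stone.p(cfg.limits)   // true — the nested object is frozen as well
cfg.limits.max = 5    // Error! cannot modify a stone object
```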
## Built-in Functions
### length(value)
Returns the length of arrays (elements), text (codepoints), blobs (bits), or functions (arity).
```javascript
length([1, 2, 3]) // 3
length("hello") // 5
length(function(a,b){}) // 2
```
### use(path)
Import a module. Returns the cached, stone value.
```javascript
var math = use('math/radians')
var json = use('json')
```
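Since `use` returns a cached, stone value, an imported module can be checked with `stone.p` and cannot be mutated (a sketch):

```javascript
var json = use('json')
stone.p(json)    // true: module values are stone
json.extra = 1   // Error! stone values cannot be modified
```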
### isa(value, type)
Check type or prototype chain.
```javascript
is_number(42) // true
is_text("hi") // true
is_array([1,2]) // true
is_object({}) // true
isa(child, parent) // true if parent is in prototype chain
```
### reverse(array)
Returns a new array with elements in reverse order.
```javascript
reverse([1, 2, 3]) // [3, 2, 1]
```
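Because `reverse` returns a new array, the original is left untouched (a sketch):

```javascript
var a = [1, 2, 3]
var b = reverse(a)
b   // [3, 2, 1]
a   // [1, 2, 3] (unchanged)
```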
### logical(value)
Convert to boolean.
```javascript
logical(0) // false
logical(1) // true
logical("true") // true
logical("false") // false
logical(null) // false
```
## Logging
```javascript
log.console("message") // standard output
log.error("problem") // error output
```
## Pattern Matching
Cell supports regex patterns in string functions, but not standalone regex objects.
```javascript
text.search("hello world", /world/)
replace("hello", /l/g, "L")
```
## Error Handling
```javascript
try {
  riskyOperation()
} catch (e) {
  log.error(e)
}
throw "something went wrong"
```
If an actor has an uncaught error, it crashes.

---
title: "Command Line Interface"
description: "The pit tool"
weight: 40
type: "docs"
---
ƿit provides a command-line interface for managing packages, running scripts, and building applications.
## Basic Usage
```bash
pit <command> [arguments]
```
## General
### pit version
Display the ƿit version.
```bash
pit version
# 0.1.0
```
### pit help
Display help information.
```bash
pit help
pit help <command>
```
## Package Commands
These commands operate on a package's `cell.toml`, source files, or build artifacts.
### pit add
Add a dependency to the current package. Installs the package to the shop, builds any C modules, and updates `cell.toml`.
```bash
pit add gitea.pockle.world/john/prosperon # remote, default alias
pit add gitea.pockle.world/john/prosperon myalias # remote, custom alias
pit add /Users/john/work/mylib # local path (symlinked)
pit add . # current directory
pit add ../sibling-package # relative path
```
For local paths, the package is symlinked into the shop rather than copied. Changes to the source directory are immediately visible.
### pit build
Build C modules for a package. Compiles each C file into a per-file dynamic library stored in the content-addressed build cache at `~/.cell/build/<hash>`. A per-package manifest is written so the runtime can find dylibs by package name. C files in `src/` directories are compiled as support objects and linked into the module dylibs. Files that previously failed to compile are skipped automatically (cached failure markers); they are retried when the source or compiler flags change.
```bash
pit build # build all packages
pit build <package> # build specific package
pit build /Users/john/work/mylib # build local package
pit build . # build current directory
pit build -t macos_arm64 # cross-compile for target
pit build -b debug # build type: release (default), debug, minsize
pit build --list-targets # list available targets
pit build --force # force rebuild
pit build --dry-run # show what would be built
pit build --verbose # print resolved flags, commands, cache status
```
### pit test
Run tests. See [Testing](/docs/testing/) for the full guide.
```bash
pit test # run tests in current package
pit test suite # run specific test file
pit test all # run all tests in current package
pit test package <name> # run tests in a named package
pit test package /Users/john/work/mylib # run tests for a local package
pit test package all # run tests from all packages
pit test suite --verify --diff # with IR verification and differential testing
```
### pit ls
List modules and actors in a package.
```bash
pit ls # list files in current project
pit ls <package> # list files in specified package
```
### pit audit
Test-compile all `.ce` and `.cm` scripts in package(s). Continues past failures and reports all errors at the end.
```bash
pit audit # audit all installed packages
pit audit <package> # audit specific package
pit audit . # audit current directory
```
### pit resolve
Print the fully resolved dependency closure for a package.
```bash
pit resolve # resolve current package
pit resolve <package> # resolve specific package
pit resolve --locked # show lock state without links
```
### pit graph
Emit a dependency graph.
```bash
pit graph # tree of current package
pit graph --format dot # graphviz dot output
pit graph --format json # json output
pit graph --world # graph all installed packages
pit graph --locked # show lock view without links
```
### pit verify
Verify integrity and consistency of packages, links, and builds.
```bash
pit verify # verify current package
pit verify shop # verify entire shop
pit verify --deep # traverse full dependency closure
pit verify --target <triple>
```
### pit pack
Build a statically linked binary from a package and all its dependencies.
```bash
pit pack <package> # build static binary (output: app)
pit pack <package> -o myapp # specify output name
pit pack <package> -t <triple> # cross-compile for target
```
### pit config
Manage system and actor configuration values in `cell.toml`.
```bash
pit config list # list all config
pit config get system.ar_timer # get a value
pit config set system.ar_timer 5.0 # set a value
pit config actor <name> list # list actor config
pit config actor <name> get <key> # get actor config
pit config actor <name> set <key> <val> # set actor config
```
### pit bench
Run benchmarks with statistical analysis. Benchmark files are `.cm` modules in a package's `benches/` directory.
```bash
pit bench # run all benchmarks in current package
pit bench all # same as above
pit bench <suite> # run specific benchmark file
pit bench package <name> # benchmark a named package
pit bench package <name> <suite> # specific benchmark in a package
pit bench package all # benchmark all packages
pit bench --bytecode <suite> # force bytecode-only benchmark run
pit bench --native <suite> # force native-only benchmark run
pit bench --compare <suite> # run bytecode and native side-by-side
```
Output includes median, mean, standard deviation, and percentiles for each benchmark.
## Shop Commands
These commands operate on the global shop (`~/.cell/`) or system-level state.
### pit install
Install a package to the shop.
```bash
pit install gitea.pockle.world/john/prosperon
pit install /Users/john/local/mypackage # local path
```
### pit remove
Remove a package from the shop. Removes the lock entry, the package directory (or symlink), and any built dylibs.
```bash
pit remove gitea.pockle.world/john/oldpackage
pit remove /Users/john/work/mylib # local path
pit remove . # current directory
pit remove mypackage --dry-run # show what would be removed
pit remove mypackage --prune # also remove orphaned dependencies
```
Options:
- `--prune` — also remove packages that are no longer needed by any remaining root
- `--dry-run` — show what would be removed without removing anything
### pit update
Update packages from remote sources.
```bash
pit update # update all packages
pit update <package> # update specific package
```
### pit list
List installed packages.
```bash
pit list # list all installed packages
pit list <package> # list dependencies of a package
```
### pit link
Manage local package links for development.
```bash
pit link add <canonical> <local_path> # link a package
pit link list # show all links
pit link delete <canonical> # remove a link
pit link clear # remove all links
```
### pit unlink
Remove a link created by `pit link` or `pit clone` and restore the original package.
```bash
pit unlink gitea.pockle.world/john/prosperon
```
### pit clone
Clone a package to a local path and link it for development.
```bash
pit clone gitea.pockle.world/john/prosperon ./prosperon
```
### pit fetch
Fetch package sources without extracting.
```bash
pit fetch <package>
```
### pit search
Search for packages, actors, or modules matching a query.
```bash
pit search math
```
### pit why
Show which installed packages depend on a given package (reverse dependency lookup).
```bash
pit why gitea.pockle.world/john/prosperon
```
### pit upgrade
Upgrade the ƿit installation itself.
```bash
pit upgrade
```
### pit clean
Clean build artifacts.
```bash
pit clean
```
## Logging
### pit log
Manage log sinks and read log files. See [Logging](/docs/logging/) for the full guide.
### pit log list
List configured sinks.
```bash
pit log list
```
### pit log add
Add a log sink.
```bash
pit log add <name> console [options] # add a console sink
pit log add <name> file <path> [options] # add a file sink
```
Options:
- `--format=pretty|bare|json` — output format (default: `pretty` for console, `json` for file)
- `--channels=ch1,ch2` — channels to subscribe (default: `console,error,system`). Use `'*'` for all channels (quote to prevent shell glob expansion).
- `--exclude=ch1,ch2` — channels to exclude (useful with `'*'`)
- `--stack=ch1,ch2` — channels that capture a full stack trace (default: `error`)
```bash
pit log add terminal console --format=bare --channels=console
pit log add errors file .cell/logs/errors.jsonl --channels=error
pit log add dump file .cell/logs/dump.jsonl '--channels=*' --exclude=console
pit log add debug console --channels=error,debug --stack=error,debug
```
### pit log remove
Remove a sink.
```bash
pit log remove <name>
```
### pit log read
Read entries from a file sink.
```bash
pit log read <sink> [options]
```
Options:
- `--lines=N` — show last N entries
- `--channel=X` — filter by channel
- `--since=timestamp` — only show entries after timestamp (seconds since epoch)
```bash
pit log read errors --lines=50
pit log read dump --channel=debug --lines=10
pit log read errors --since=1702656000
```
### pit log tail
Follow a file sink in real time.
```bash
pit log tail <sink> [--lines=N]
```
`--lines=N` controls how many existing entries to show on start (default: 10).
```bash
pit log tail dump
pit log tail errors --lines=20
```
## Developer Commands
Compiler pipeline tools, analysis, and testing. These are primarily useful for developing the ƿit compiler and runtime.
### Compiler Pipeline
Each of these commands runs the compilation pipeline up to a specific stage and prints the intermediate output. They take a source file as input.
### pit tokenize
Tokenize a source file and output the token stream as JSON.
```bash
pit tokenize <file.cm>
```
### pit parse
Parse a source file and output the AST as JSON.
```bash
pit parse <file.cm>
```
### pit fold
Run constant folding and semantic analysis on a source file and output the simplified AST as JSON.
```bash
pit fold <file.cm>
```
### pit mcode
Compile a source file to mcode (machine-independent intermediate representation) and output as JSON.
```bash
pit mcode <file.cm>
```
### pit streamline
Apply the full optimization pipeline to a source file and output optimized mcode as JSON.
```bash
pit streamline <file.cm> # full optimized IR as JSON (default)
pit streamline --stats <file.cm> # summary stats per function
pit streamline --ir <file.cm> # human-readable IR
pit streamline --check <file.cm> # warnings only (e.g. high slot count)
```
Flags can be combined, e.g. `pit streamline --stats --check <file.cm>`. `--stats` output includes the function name, args, slots, instruction counts by category, and nops eliminated. `--check` warns when `nr_slots > 200` (approaching the 255 limit).
### pit qbe
Compile a source file to QBE intermediate language (for native code generation).
```bash
pit qbe <file.cm>
```
### pit compile
Compile a source file to a native dynamic library.
```bash
pit compile <file.cm> # outputs .dylib to ~/.cell/build/
pit compile <file.ce>
```
### pit run_native
Compile a module natively and compare execution against interpreted mode, showing timing differences.
```bash
pit run_native <module> # compare interpreted vs native
pit run_native <module> <test_arg> # pass argument to module function
```
### pit run_aot
Ahead-of-time compile and execute a program natively.
```bash
pit run_aot <program.ce>
```
### pit seed
Regenerate the boot seed files in `boot/`. Seeds are pre-compiled mcode IR (JSON) that bootstrap the compilation pipeline on cold start. They only need regenerating when the pipeline source changes in a way the existing seeds can't compile, or before distribution.
```bash
pit seed # regenerate all boot seeds
pit seed --clean # also clear the build cache after
```
The engine recompiles pipeline modules automatically when source changes (via content-addressed cache). Seeds are a fallback for cold start when no cache exists.
### Analysis
### pit explain
Query the semantic index for symbol information at a specific source location or by name.
```bash
pit explain --span <file:line:col> # find symbol at position
pit explain --symbol <name> [files...] # find symbol by name
pit explain --help # show usage
```
### pit index
Build a semantic index for a source file and output symbol information as JSON.
```bash
pit index <file.cm> # output to stdout
pit index <file.cm> -o index.json # output to file
pit index --help # show usage
```
### pit ir_report
Optimizer flight recorder — capture detailed information about IR transformations during optimization.
```bash
pit ir_report <file.cm> # per-pass JSON summaries (default)
pit ir_report <file.cm> --events # include rewrite events
pit ir_report <file.cm> --types # include type deltas
pit ir_report <file.cm> --ir-before=PASS # print IR before specific pass
pit ir_report <file.cm> --ir-after=PASS # print IR after specific pass
pit ir_report <file.cm> --ir-all # print IR before/after every pass
pit ir_report <file.cm> --full # all options combined
```
Output is NDJSON (newline-delimited JSON).
### Testing
### pit diff
Differential testing — run tests with and without optimizations and compare results.
```bash
pit diff # diff all test files in current package
pit diff <suite> # diff specific test file
pit diff tests/<path> # diff by path
```
### pit fuzz
Random program fuzzer — generates random programs and checks for optimization correctness by comparing optimized vs unoptimized execution.
```bash
pit fuzz # 100 iterations with random seed
pit fuzz <iterations> # specific number of iterations
pit fuzz --seed <N> # start at specific seed
pit fuzz <iterations> --seed <N>
```
Failures are saved to `tests/fuzz_failures/`.
### pit vm_suite
Run the VM stability test suite (641 tests covering arithmetic, strings, control flow, closures, objects, and more).
```bash
pit vm_suite
```
### pit syntax_suite
Run the syntax feature test suite (covers all literal types, operators, control flow, functions, prototypes, and more).
```bash
pit syntax_suite
```
## Package Locators
Packages are identified by locators:
- **Remote**: a host path such as `gitea.pockle.world/john/prosperon`
- **Local**: `/absolute/path/to/package`
```bash
pit install gitea.pockle.world/john/prosperon
pit install /Users/john/work/mylib
```
## Configuration
ƿit stores its data in `~/.cell/`:
```
~/.cell/
├── packages/ # installed package sources
├── build/ # content-addressed cache (bytecode, dylibs, manifests)
├── cache/ # downloaded archives
├── lock.toml # installed package versions
└── link.toml # local development links
```
## Environment
ƿit reads the `HOME` environment variable to locate the shop directory.
## Exit Codes

docs/compiler-tools.md Normal file
---
title: "Compiler Inspection Tools"
description: "Tools for inspecting and debugging the compiler pipeline"
weight: 50
type: "docs"
---
ƿit includes a set of tools for inspecting the compiler pipeline at every stage. These are useful for debugging, testing optimizations, and understanding what the compiler does with your code.
## Pipeline Overview
The compiler runs in stages:
```
source → tokenize → parse → fold → mcode → streamline → output
```
Each stage has a corresponding CLI tool that lets you see its output.
| Stage | Tool | What it shows |
|-------------|---------------------------|----------------------------------------|
| tokenize | `tokenize.ce` | Token stream as JSON |
| parse | `parse.ce` | Unfolded AST as JSON |
| fold | `fold.ce` | Folded AST as JSON |
| mcode | `mcode.ce` | Raw mcode IR as JSON |
| mcode | `mcode.ce --pretty` | Human-readable mcode IR |
| streamline | `streamline.ce` | Full optimized IR as JSON |
| streamline | `streamline.ce --types` | Optimized IR with type annotations |
| streamline | `streamline.ce --stats` | Per-function summary stats |
| streamline | `streamline.ce --ir` | Human-readable canonical IR |
| disasm | `disasm.ce` | Source-interleaved disassembly |
| disasm | `disasm.ce --optimized` | Optimized source-interleaved disassembly |
| diff | `diff_ir.ce` | Mcode vs streamline instruction diff |
| xref | `xref.ce` | Cross-reference / call creation graph |
| cfg | `cfg.ce` | Control flow graph (basic blocks) |
| slots | `slots.ce` | Slot data flow / use-def chains |
| all | `ir_report.ce` | Structured optimizer flight recorder |
All tools take a source file as input and run the pipeline up to the relevant stage.
## Quick Start
```bash
# see raw mcode IR (pretty-printed)
pit mcode --pretty myfile.ce
# source-interleaved disassembly
pit disasm myfile.ce
# see optimized IR with type annotations
pit streamline --types myfile.ce
# full optimizer report with events
pit ir_report --full myfile.ce
```
## fold.ce
Prints the folded AST as JSON. This is the output of the parser and constant folder, before mcode generation.
```bash
pit fold <file.ce|file.cm>
```
## mcode.ce
Prints mcode IR. Default output is JSON; use `--pretty` for human-readable format with opcodes, operands, and program counter.
```bash
pit mcode <file.ce|file.cm> # JSON (default)
pit mcode --pretty <file.ce|file.cm> # human-readable IR
```
## streamline.ce
Runs the full pipeline (tokenize, parse, fold, mcode, streamline) and outputs the optimized IR as JSON. Useful for piping to `jq` or saving for comparison.
```bash
pit streamline <file.ce|file.cm> # full JSON (default)
pit streamline --stats <file.ce|file.cm> # summary stats per function
pit streamline --ir <file.ce|file.cm> # human-readable IR
pit streamline --check <file.ce|file.cm> # warnings only
pit streamline --types <file.ce|file.cm> # IR with type annotations
pit streamline --diagnose <file.ce|file.cm> # compile-time diagnostics
```
| Flag | Description |
|------|-------------|
| (none) | Full optimized IR as JSON (backward compatible) |
| `--stats` | Per-function summary: args, slots, instruction counts by category, nops eliminated |
| `--ir` | Human-readable canonical IR (same format as `ir_report.ce`) |
| `--check` | Warnings only (e.g. `nr_slots > 200` approaching 255 limit) |
| `--types` | Optimized IR with inferred type annotations per slot |
| `--diagnose` | Run compile-time diagnostics (type errors and warnings) |
Flags can be combined, e.g. `pit streamline --stats --check myfile.ce`.
## disasm.ce
Source-interleaved disassembly. Shows mcode or optimized IR with source lines interleaved, making it easy to see which instructions were generated from which source code.
```bash
pit disasm <file> # disassemble all functions (mcode)
pit disasm --optimized <file> # disassemble optimized IR (streamline)
pit disasm --fn 87 <file> # show only function 87
pit disasm --fn my_func <file> # show only functions named "my_func"
pit disasm --line 235 <file> # show instructions generated from line 235
```
| Flag | Description |
|------|-------------|
| (none) | Raw mcode IR with source interleaving (default) |
| `--optimized` | Use optimized IR (streamline) instead of raw mcode |
| `--fn <N\|name>` | Filter to specific function by index or name substring |
| `--line <N>` | Show only instructions generated from a specific source line |
### Output Format
Functions are shown with a header including argument count, slot count, and the source line where the function begins. Instructions are grouped by source line, with the source text shown before each group:
```
=== [87] <anonymous> (args=0, slots=12, closures=0) [line 234] ===
--- line 235: var result = compute(x, y) ---
0 access 2, "compute" :235
1 get 3, 1, 0 :235
2 get 4, 1, 1 :235
3 invoke 3, 2, 2 :235
--- line 236: if (result > 0) { ---
4 access 5, 0 :236
5 gt 6, 4, 5 :236
6 jump_false 6, "else_1" :236
```
Each instruction line shows:
- Program counter (left-aligned)
- Opcode
- Operands (comma-separated)
- Source line number (`:N` suffix, right-aligned)
Function creation instructions include a cross-reference annotation showing the target function's name:
```
3 function 5, 12 :235 ; -> [12] helper_fn
```
## diff_ir.ce
Compares mcode IR (before optimization) with streamline IR (after optimization), showing what the optimizer changed. Useful for understanding which instructions were eliminated, specialized, or rewritten.
```bash
pit diff_ir <file> # diff all functions
pit diff_ir --fn <N|name> <file> # diff only one function
pit diff_ir --summary <file> # counts only
```
| Flag | Description |
|------|-------------|
| (none) | Show all diffs with source interleaving |
| `--fn <N\|name>` | Filter to specific function by index or name |
| `--summary` | Show only eliminated/rewritten counts per function |
### Output Format
Changed instructions are shown in diff style with `-` (before) and `+` (after) lines:
```
=== [0] <anonymous> (args=1, slots=40) ===
17 eliminated, 51 rewritten
--- line 4: if (n <= 1) { ---
- 1 is_int 4, 1 :4
+ 1 is_int 3, 1 :4 (specialized)
- 3 is_int 5, 2 :4
+ 3 _nop_tc_1 (eliminated)
```
Summary mode gives a quick overview:
```
[0] <anonymous>: 17 eliminated, 51 rewritten
[1] <anonymous>: 65 eliminated, 181 rewritten
total: 86 eliminated, 250 rewritten across 4 functions
```
## xref.ce
Cross-reference / call graph tool. Shows which functions create other functions (via `function` instructions), building a creation tree.
```bash
pit xref <file> # full creation tree
pit xref --callers <N> <file> # who creates function [N]?
pit xref --callees <N> <file> # what does [N] create/call?
pit xref --dot <file> # DOT graph for graphviz
pit xref --optimized <file> # use optimized IR
```
| Flag | Description |
|------|-------------|
| (none) | Indented creation tree from main |
| `--callers <N>` | Show which functions create function [N] |
| `--callees <N>` | Show what function [N] creates (use -1 for main) |
| `--dot` | Output DOT format for graphviz |
| `--optimized` | Use optimized IR instead of raw mcode |
### Output Format
Default tree view:
```
demo_disasm.cm
[0] <anonymous>
[1] <anonymous>
[2] <anonymous>
```
Caller/callee query:
```
Callers of [0] <anonymous>:
demo_disasm.cm at line 3
```
DOT output can be piped to graphviz: `pit xref --dot file.cm | dot -Tpng -o xref.png`
## cfg.ce
Control flow graph tool. Identifies basic blocks from labels and jumps, computes edges, and detects loop back-edges.
```bash
pit cfg --fn <N|name> <file> # text CFG for function
pit cfg --dot --fn <N|name> <file> # DOT output for graphviz
pit cfg <file> # text CFG for all functions
pit cfg --optimized <file> # use optimized IR
```
| Flag | Description |
|------|-------------|
| `--fn <N\|name>` | Filter to specific function by index or name |
| `--dot` | Output DOT format for graphviz |
| `--optimized` | Use optimized IR instead of raw mcode |
### Output Format
```
=== [0] <anonymous> ===
B0 [pc 0-2, line 4]:
0 access 2, 1
1 is_int 4, 1
2 jump_false 4, "rel_ni_2"
-> B3 "rel_ni_2" (jump)
-> B1 (fallthrough)
B1 [pc 3-4, line 4]:
3 is_int 5, 2
4 jump_false 5, "rel_ni_2"
-> B3 "rel_ni_2" (jump)
-> B2 (fallthrough)
```
Each block shows its ID, PC range, source lines, instructions, and outgoing edges. Loop back-edges (target PC <= source PC) are annotated.
## slots.ce
Slot data flow analysis. Builds use-def chains for every slot in a function, showing where each slot is defined and used. Optionally captures type information from streamline.
```bash
pit slots --fn <N|name> <file> # slot summary for function
pit slots --slot <N> --fn <N|name> <file> # trace slot N
pit slots <file> # slot summary for all functions
```
| Flag | Description |
|------|-------------|
| `--fn <N\|name>` | Filter to specific function by index or name |
| `--slot <N>` | Show chronological DEF/USE trace for a specific slot |
### Output Format
Summary shows each slot with its def count, use count, inferred type, and first definition. Dead slots (defined but never used) are flagged:
```
=== [0] <anonymous> (args=1, slots=40) ===
slot defs uses type first-def
s0 0 0 - (this)
s1 0 10 - (arg 0)
s2 1 6 - pc 0: access
s10 1 0 - pc 29: invoke <- dead
```
Slot trace (`--slot N`) shows every DEF and USE in program order:
```
=== slot 3 in [0] <anonymous> ===
DEF pc 5: le_int 3, 1, 2 :4
DEF pc 11: le_float 3, 1, 2 :4
DEF pc 17: le_text 3, 1, 2 :4
USE pc 31: jump_false 3, "if_else_0" :4
```
## seed.ce
Regenerates the boot seed files in `boot/`. These are pre-compiled mcode IR (JSON) files that bootstrap the compilation pipeline on cold start.
```bash
pit seed # regenerate all boot seeds
pit seed --clean # also clear the build cache after
```
The script compiles each pipeline module (tokenize, parse, fold, mcode, streamline) and `internal/bootstrap.cm` through the current pipeline, encodes the output as JSON, and writes it to `boot/<name>.cm.mcode`.
**When to regenerate seeds:**
- Before a release or distribution
- When the pipeline source changes in a way the existing seeds can't compile the new source (e.g. language-level changes)
- Seeds do NOT need regenerating for normal development — the engine recompiles pipeline modules from source automatically via the content-addressed cache
## ir_report.ce
The optimizer flight recorder. Runs the full pipeline with structured logging and outputs machine-readable, diff-friendly JSON. This is the most detailed tool for understanding what the optimizer did and why.
```bash
pit ir_report [options] <file.ce|file.cm>
```
### Options
| Flag | Description |
|------|-------------|
| `--summary` | Per-pass JSON summaries with instruction counts and timing (default) |
| `--events` | Include rewrite events showing each optimization applied |
| `--types` | Include type delta records showing inferred slot types |
| `--ir-before=PASS` | Print canonical IR before a specific pass |
| `--ir-after=PASS` | Print canonical IR after a specific pass |
| `--ir-all` | Print canonical IR before and after all passes |
| `--full` | Everything: summary + events + types + ir-all |
With no flags, `--summary` is the default.
### Output Format
Output is line-delimited JSON. Each line is a self-contained JSON object with a `type` field:
**`type: "pass"`** — Per-pass summary with categorized instruction counts before and after:
```json
{
  "type": "pass",
  "pass": "eliminate_type_checks",
  "fn": "fib",
  "ms": 0.12,
  "changed": true,
  "before": {"instr": 77, "nop": 0, "guard": 16, "branch": 28, ...},
  "after": {"instr": 77, "nop": 1, "guard": 15, "branch": 28, ...},
  "changes": {"guards_removed": 1, "nops_added": 1}
}
```
**`type: "event"`** — Individual rewrite event with before/after instructions and reasoning:
```json
{
  "type": "event",
  "pass": "eliminate_type_checks",
  "rule": "incompatible_type_forces_jump",
  "at": 3,
  "before": [["is_int", 5, 2, 4, 9], ["jump_false", 5, "rel_ni_2", 4, 9]],
  "after": ["_nop_tc_1", ["jump", "rel_ni_2", 4, 9]],
  "why": {"slot": 2, "known_type": "float", "checked_type": "int"}
}
```
**`type: "types"`** — Inferred type information for a function:
```json
{
  "type": "types",
  "fn": "fib",
  "param_types": {},
  "slot_types": {"25": "null"}
}
```
**`type: "ir"`** — Canonical IR text for a function at a specific point:
```json
{
  "type": "ir",
  "when": "before",
  "pass": "all",
  "fn": "fib",
  "text": "fn fib (args=1, slots=26)\n @0 access s2, 2\n ..."
}
```
### Rewrite Rules
Each pass records events with named rules:
**eliminate_type_checks:**
- `known_type_eliminates_guard` — type already known, guard removed
- `incompatible_type_forces_jump` — type conflicts, conditional jump becomes unconditional
- `num_subsumes_int_float` — num check satisfied by int or float
- `dynamic_to_field` — load_dynamic/store_dynamic narrowed to field access
- `dynamic_to_index` — load_dynamic/store_dynamic narrowed to index access
**simplify_algebra:**
- `add_zero`, `sub_zero`, `mul_one`, `div_one` — identity operations become moves
- `mul_zero` — multiplication by zero becomes constant
- `self_eq`, `self_ne` — same-slot comparisons become constants
**simplify_booleans:**
- `not_jump_false_fusion` — not + jump_false fused into jump_true
- `not_jump_true_fusion` — not + jump_true fused into jump_false
- `double_not` — not + not collapsed to move
**eliminate_moves:**
- `self_move` — move to same slot becomes nop
**eliminate_dead_jumps:**
- `jump_to_next` — jump to immediately following label becomes nop
### Canonical IR Format
The `--ir-all`, `--ir-before`, and `--ir-after` flags produce a deterministic text representation of the IR:
```
fn fib (args=1, slots=26)
@0 access s2, 2
@1 is_int s4, s1 ; [guard]
@2 jump_false s4, "rel_ni_2" ; [branch]
@3 --- nop (tc) ---
@4 jump "rel_ni_2" ; [branch]
@5 lt_int s3, s1, s2
@6 jump "rel_done_4" ; [branch]
rel_ni_2:
@8 is_num s4, s1 ; [guard]
```
Properties:
- `@N` is the raw array index, stable across passes (passes replace, never insert or delete)
- `sN` prefix distinguishes slot operands from literal values
- String operands are quoted
- Labels appear as indented headers with a colon
- Category tags in brackets: `[guard]`, `[branch]`, `[load]`, `[store]`, `[call]`, `[arith]`, `[move]`, `[const]`
- Nops shown as `--- nop (reason) ---` with reason codes: `tc` (type check), `bl` (boolean), `mv` (move), `dj` (dead jump), `ur` (unreachable)
### Examples
```bash
# what passes changed something?
pit ir_report --summary myfile.ce | jq 'select(.changed)'
# list all rewrite rules that fired
pit ir_report --events myfile.ce | jq 'select(.type == "event") | .rule'
# diff IR before and after optimization
pit ir_report --ir-all myfile.ce | jq -r 'select(.type == "ir") | .text'
# full report for analysis
pit ir_report --full myfile.ce > report.json
```
## ir_stats.cm
A utility module used by `ir_report.ce` and available for custom tooling. Not a standalone tool.
```javascript
var ir_stats = use("ir_stats")
ir_stats.detailed_stats(func) // categorized instruction counts
ir_stats.ir_fingerprint(func) // djb2 hash of instruction array
ir_stats.canonical_ir(func, name, opts) // deterministic text representation
ir_stats.type_snapshot(slot_types) // frozen copy of type map
ir_stats.type_delta(before_types, after_types) // compute type changes
ir_stats.category_tag(op) // classify an opcode
```
### Instruction Categories
`detailed_stats` classifies each instruction into one of these categories:
| Category | Opcodes |
|----------|---------|
| load | `load_field`, `load_index`, `load_dynamic`, `get`, `access` (non-constant) |
| store | `store_field`, `store_index`, `store_dynamic`, `set_var`, `put`, `push` |
| branch | `jump`, `jump_true`, `jump_false`, `jump_not_null` |
| call | `invoke`, `goinvoke` |
| guard | `is_int`, `is_text`, `is_num`, `is_bool`, `is_null`, `is_array`, `is_func`, `is_record`, `is_stone` |
| arith | `add_int`, `sub_int`, ..., `add_float`, ..., `concat`, `neg_int`, `neg_float`, bitwise ops |
| move | `move` |
| const | `int`, `true`, `false`, `null`, `access` (with constant value) |
| label | string entries that are not nops |
| nop | strings starting with `_nop_` |
| other | everything else (`frame`, `setarg`, `array`, `record`, `function`, `return`, etc.) |
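As an illustration of the classification above, a name-only classifier can be sketched in plain JavaScript. This is a hypothetical sketch, not the actual `ir_stats.cm` source — in particular, the real `category_tag` also inspects operands (e.g. whether an `access` carries a constant value), which a lookup on the opcode name alone cannot do:

```javascript
// Hypothetical sketch of the category table above (illustration only).
// Keys on the opcode name; operand-sensitive cases (`access`) are simplified.
function category_tag(op) {
    if (op.startsWith("_nop_")) return "nop";
    var table = {
        load: ["load_field", "load_index", "load_dynamic", "get", "access"],
        store: ["store_field", "store_index", "store_dynamic", "set_var", "put", "push"],
        branch: ["jump", "jump_true", "jump_false", "jump_not_null"],
        call: ["invoke", "goinvoke"],
        guard: ["is_int", "is_text", "is_num", "is_bool", "is_null",
                "is_array", "is_func", "is_record", "is_stone"],
        move: ["move"],
        const: ["int", "true", "false", "null"]
    };
    for (var cat in table) {
        if (table[cat].indexOf(op) >= 0) return cat;
    }
    // Typed arithmetic (add_int, neg_float, ...) plus concat fall in arith;
    // guards like is_int were already caught above.
    if (/_(int|float)$/.test(op) || op === "concat") return "arith";
    return "other";
}
```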

# Cell
![image](wizard.png)
Cell is an actor-based scripting language for building concurrent applications. It combines a familiar C-like syntax with the actor model of computation, optimized for low memory usage and simplicity.
## Key Features
- **Actor Model** — isolated memory, message passing, no shared state
- **Immutability** — `stone()` makes values permanently frozen
- **Prototype Inheritance** — objects without classes
- **C Integration** — seamlessly extend with native code
- **Cross-Platform** — deploy to desktop, web, and embedded
## Quick Start
```javascript
// hello.ce - A simple actor
log.console("Hello, Cell!")
$stop()
```
```bash
cell hello
```
## Documentation
- [**Cell Language**](cellscript.md) — syntax, types, and built-in functions
- [**Actors and Modules**](actors.md) — the execution model
- [**Packages**](packages.md) — code organization and sharing
- [**Command Line**](cli.md) — the `cell` tool
- [**Writing C Modules**](c-modules.md) — native extensions
## Standard Library
- [text](library/text.md) — string manipulation
- [number](library/number.md) — numeric operations (functions are global: `floor()`, `max()`, etc.)
- [array](library/array.md) — array utilities
- [object](library/object.md) — object utilities
- [blob](library/blob.md) — binary data
- [time](library/time.md) — time and dates
- [math](library/math.md) — trigonometry and math
- [json](library/json.md) — JSON encoding/decoding
- [random](library/random.md) — random numbers
## Architecture
Cell programs are organized into **packages**. Each package contains:
- **Modules** (`.cm`) — return a value, cached and frozen
- **Actors** (`.ce`) — run independently, communicate via messages
- **C files** (`.c`) — compiled to native libraries
Actors never share memory. They communicate by sending messages, which are automatically serialized. This makes concurrent programming safe and predictable.
## Installation
```bash
# Clone and bootstrap
git clone https://gitea.pockle.world/john/cell
cd cell
make bootstrap
```
The Cell shop is stored at `~/.cell/`.

docs/kim.md
---
title: "Kim Encoding"
description: "Compact character and count encoding"
weight: 80
type: "docs"
---
Kim is a character and count encoding designed by Douglas Crockford. It encodes Unicode characters and variable-length integers using continuation bytes. Kim is simpler and more compact than UTF-8 for most text.
## Continuation Bytes
The fundamental idea in Kim is the continuation byte:
```
C D D D D D D D
```
- **C** — continue bit. If 1, read another byte. If 0, this is the last byte.
- **D** (7 bits) — data bits.
To decode: shift the accumulator left by 7 bits, add the 7 data bits. If the continue bit is 1, repeat with the next byte. If 0, the value is complete.
To encode: take the value, emit 7 bits at a time from most significant to least significant, setting the continue bit on all bytes except the last.
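Both directions fit in a few lines. This is a sketch of the algorithm in plain JavaScript, not the runtime's implementation:

```javascript
// Kim continuation-byte encoding/decoding (illustrative sketch).
function kim_encode(value) {
    // Emit 7 bits at a time, most significant group first; set the
    // continue bit (0x80) on every byte except the last.
    var bytes = [value & 0x7F];
    value = Math.floor(value / 128);
    while (value > 0) {
        bytes.unshift((value & 0x7F) | 0x80);
        value = Math.floor(value / 128);
    }
    return bytes;
}

function kim_decode(bytes) {
    // Shift the accumulator left 7 bits, add the data bits,
    // and stop when the continue bit is 0.
    var acc = 0;
    for (var i = 0; i < bytes.length; i++) {
        acc = acc * 128 + (bytes[i] & 0x7F);
        if ((bytes[i] & 0x80) === 0) break;
    }
    return acc;
}

kim_encode(0x1F4A9) // [0x87, 0xE9, 0x29]
```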
## Character Encoding
Kim encodes Unicode codepoints directly as continuation byte sequences:
| Range | Bytes | Characters |
|-------|-------|------------|
| U+0000 to U+007F | 1 | ASCII |
| U+0080 to U+3FFF | 2 | First quarter of BMP |
| U+4000 to U+10FFFF | 3 | All other Unicode |
Unlike UTF-8, there is no need for surrogate pairs or escapement. Every Unicode character, including emoji and characters from extended planes, is encoded in at most 3 bytes.
### Examples
```
'A' (U+0041) → 41
'é' (U+00E9) → 81 69
'💩' (U+1F4A9) → 87 E9 29
```
## Count Encoding
Kim is also used for encoding counts (lengths, sizes). The same continuation byte format represents non-negative integers of arbitrary size:
| Range | Bytes |
|-------|-------|
| 0 to 127 | 1 |
| 128 to 16383 | 2 |
| 16384 to 2097151 | 3 |
## Comparison with UTF-8
| Property | Kim | UTF-8 |
|----------|-----|-------|
| ASCII | 1 byte | 1 byte |
| BMP (first quarter) | 2 bytes | 2-3 bytes |
| Full Unicode | 3 bytes | 3-4 bytes |
| Self-synchronizing | No | Yes |
| Sortable | No | Yes |
| Simpler to implement | Yes | No |
| Byte count for counts | Variable (7 bits/byte) | Not applicable |
Kim trades self-synchronization (the ability to find character boundaries from any position) for simplicity and compactness. In practice, Kim text is accessed sequentially, so self-synchronization is not needed.
## Usage in ƿit
Kim is used internally by blobs and by the Nota message format.
### In Blobs
The `blob.write_text` and `blob.read_text` functions use Kim to encode text into binary data:
```javascript
var blob = use('blob')
var b = blob.make()
blob.write_text(b, "hello") // Kim-encoded length + characters
stone(b)
var text = blob.read_text(b, 0) // "hello"
```
### In Nota
Nota uses Kim for two purposes:
1. **Counts** — array lengths, text lengths, blob sizes, record pair counts
2. **Characters** — text content within Nota messages
The preamble byte of each Nota value incorporates the first few bits of a Kim-encoded count, with the continue bit indicating whether more bytes follow.
See [Nota Format](#nota) for the full specification.

docs/language.md
---
title: "ƿit Language"
description: "Syntax, types, operators, and built-in functions"
weight: 10
type: "docs"
---
ƿit is a scripting language for actor-based programming. It combines a familiar syntax with a prototype-based object system and strict immutability semantics.
## Basics
### Variables and Constants
Variables are declared with `var`, constants with `def`. All declarations must be initialized and must appear at the function body level — not inside `if`, `while`, `for`, or `do` blocks.
```javascript
var x = 10
var name = "pit"
var empty = null
def PI = 3.14159 // constant, cannot be reassigned
var a = 1, b = 2, c = 3 // multiple declarations
```
### Data Types
ƿit has eight fundamental types:
- **number** — DEC64 decimal floating point (no rounding errors)
- **text** — Unicode strings
- **logical** — `true` or `false`
- **null** — the absence of a value (no `undefined`)
- **array** — ordered, numerically-indexed sequences
- **object** — key-value records with prototype inheritance
- **blob** — binary data (bits, not bytes)
- **function** — first-class callable values
### Literals
```javascript
// Numbers
42
3.14
-5
0
1e3 // scientific notation (1000)
// Text
"hello"
`template ${x}` // string interpolation
`${1 + 2}` // expression interpolation
// Logical
true
false
// Null
null
// Arrays
[1, 2, 3]
[]
// Objects
{a: 1, b: "two"}
{}
// Regex
/\d+/
/hello/i // with flags
```
## Operators
### Arithmetic
```javascript
2 + 3 // 5
5 - 3 // 2
3 * 4 // 12
12 / 4 // 3
10 % 3 // 1
2 ** 3 // 8 (exponentiation)
```
### Comparison
All comparisons are strict — there is no type coercion.
```javascript
5 == 5 // true
5 != 6 // true
3 < 5 // true
5 > 3 // true
3 <= 3 // true
5 >= 5 // true
```
### Logical
```javascript
true && true // true
true && false // false
false || true // true
false || false // false
!true // false
!false // true
```
Logical operators short-circuit:
```javascript
var called = false
var fn = function() { called = true; return true }
var r = false && fn() // fn() not called
r = true || fn() // fn() not called
```
### Bitwise
```javascript
5 & 3 // 1 (AND)
5 | 3 // 7 (OR)
5 ^ 3 // 6 (XOR)
~0 // -1 (NOT)
1 << 3 // 8 (left shift)
8 >> 3 // 1 (right shift)
-1 >>> 1 // 2147483647 (unsigned right shift)
```
### Unary
```javascript
+5 // 5
-5 // -5
-(-5) // 5
```
### Increment and Decrement
```javascript
var x = 5
x++ // returns 5, x becomes 6 (postfix)
++x // returns 7, x becomes 7 (prefix)
x-- // returns 7, x becomes 6 (postfix)
--x // returns 5, x becomes 5 (prefix)
```
### Compound Assignment
```javascript
var x = 10
x += 3 // 13
x -= 3 // 10
x *= 2 // 20
x /= 4 // 5
x %= 3 // 2
```
### Ternary
```javascript
var a = true ? 1 : 2 // 1
var b = false ? 1 : 2 // 2
var c = true ? (false ? 1 : 2) : 3 // 2 (nested)
```
### Comma
The comma operator evaluates all expressions and returns the last.
```javascript
var x = (1, 2, 3) // 3
```
### In
Test whether a key exists in an object.
```javascript
var o = {a: 1}
"a" in o // true
"b" in o // false
```
### Delete
Remove a key from an object.
```javascript
var o = {a: 1, b: 2}
delete o.a
"a" in o // false
o.b // 2
```
## Property Access
### Dot and Bracket
```javascript
var o = {x: 10}
o.x // 10 (dot read)
o.x = 20 // dot write
o["x"] // 20 (bracket read)
var key = "x"
o[key] // 20 (computed bracket)
o["y"] = 30 // bracket write
```
### Object as Key
Objects can be used as keys in other objects.
```javascript
var k = {}
var o = {}
o[k] = 42
o[k] // 42
o[{}] // null (different object)
k in o // true
delete o[k]
k in o // false
```
### Chained Access
```javascript
var d = {a: {b: [1, {c: 99}]}}
d.a.b[1].c // 99
```
## Arrays
Arrays are **distinct from objects**. They are ordered, numerically-indexed sequences.
```javascript
var arr = [1, 2, 3]
arr[0] // 1
arr[2] = 10 // [1, 2, 10]
length(arr) // 3
```
### Push and Pop
```javascript
var a = [1, 2]
a[] = 3 // push: [1, 2, 3]
length(a) // 3
var v = a[] // pop: v is 3, a is [1, 2]
length(a) // 2
```
## Objects
Objects are key-value records with prototype-based inheritance.
```javascript
var point = {x: 10, y: 20}
point.x // 10
point["y"] // 20
```
### Prototypes
```javascript
// Create object with prototype
var parent = {x: 10}
var child = meme(parent)
child.x // 10 (inherited)
proto(child) // parent
// Override does not mutate parent
child.x = 20
parent.x // 10
```
### Mixins
```javascript
var p = {a: 1}
var m1 = {b: 2}
var m2 = {c: 3}
var child = meme(p, [m1, m2])
child.a // 1 (from prototype)
child.b // 2 (from mixin)
child.c // 3 (from mixin)
```
## Control Flow
### If / Else
```javascript
var x = 0
if (true) x = 1
if (false) x = 2 else x = 3
if (false) x = 4
else if (true) x = 5
else x = 6
```
### While
```javascript
var i = 0
while (i < 5) i++
// break
i = 0
while (true) {
if (i >= 3) break
i++
}
// continue
var sum = 0
i = 0
while (i < 5) {
i++
if (i % 2 == 0) continue
sum += i
}
```
### For
Variables cannot be declared in the for initializer. Declare them at the function body level.
```javascript
var sum = 0
var i = 0
for (i = 0; i < 5; i++) sum += i
// break
sum = 0
i = 0
for (i = 0; i < 10; i++) {
if (i == 5) break
sum += i
}
// continue
sum = 0
i = 0
for (i = 0; i < 5; i++) {
if (i % 2 == 0) continue
sum += i
}
// nested
sum = 0
var j = 0
for (i = 0; i < 3; i++) {
for (j = 0; j < 3; j++) {
sum++
}
}
```
## Functions
### Function Expressions
```javascript
var add = function(a, b) { return a + b }
add(2, 3) // 5
```
### Arrow Functions
```javascript
var double = x => x * 2
double(5) // 10
var sum = (a, b) => a + b
sum(2, 3) // 5
var block = x => {
var y = x * 2
return y + 1
}
block(5) // 11
```
### Return
A function with no `return` returns `null`. An early `return` exits immediately.
```javascript
var fn = function() { var x = 1 }
fn() // null
var fn2 = function() { return 1; return 2 }
fn2() // 1
```
### Arguments
Functions can have at most **4 parameters**. Use a record to pass more values.
Extra arguments are ignored. Missing arguments are `null`.
```javascript
var fn = function(a, b) { return a + b }
fn(1, 2, 3) // 3 (extra arg ignored)
var fn2 = function(a, b) { return a }
fn2(1) // 1 (b is null)
// More than 4 parameters — use a record
var draw = function(shape, opts) {
// opts.x, opts.y, opts.color, ...
}
```
### Immediately Invoked Function Expression
```javascript
var r = (function(x) { return x * 2 })(21) // 42
```
### Closures
Functions capture variables from their enclosing scope.
```javascript
var make = function(x) {
return function(y) { return x + y }
}
var add5 = make(5)
add5(3) // 8
```
Captured variables can be mutated:
```javascript
var counter = function() {
var n = 0
return function() { n = n + 1; return n }
}
var c = counter()
c() // 1
c() // 2
```
### Recursion
```javascript
var fact = function(n) {
if (n <= 1) return 1
return n * fact(n - 1)
}
fact(5) // 120
```
### This Binding
When a function is called as a method, `this` refers to the object.
```javascript
var obj = {
val: 10,
get: function() { return this.val }
}
obj.get() // 10
```
### Currying
```javascript
var f = function(a) {
return function(b) {
return function(c) { return a + b + c }
}
}
f(1)(2)(3) // 6
```
## Identifiers
Identifiers can contain `?` and `!` characters, both as suffixes and mid-name.
```javascript
var nil? = (x) => x == null
nil?(null) // true
nil?(42) // false
var set! = (x) => x + 1
set!(5) // 6
var is?valid = (x) => x > 0
is?valid(3) // true
var do!stuff = () => 42
do!stuff() // 42
```
The `?` in an identifier is not confused with the ternary operator:
```javascript
var nil? = (x) => x == null
var a = nil?(null) ? "yes" : "no" // "yes"
```
## Type Checking
### Type Functions
```javascript
is_number(42) // true
is_text("hi") // true
is_logical(true) // true
is_object({}) // true (records only)
is_array([]) // true
is_function(function(){}) // true
is_null(null) // true
is_object([]) // false (arrays are not records)
is_object("hello") // false (text is not a record)
is_array({}) // false (records are not arrays)
```
### Truthiness
Falsy values: `false`, `0`, `""`, `null`. Everything else is truthy.
```javascript
if (0) ... // not entered
if ("") ... // not entered
if (null) ... // not entered
if (1) ... // entered
if ("hi") ... // entered
if ({}) ... // entered
if ([]) ... // entered
```
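The falsy set can be modeled as an explicit predicate. This is a sketch in plain JavaScript (whose own native truthiness differs — `NaN` and `undefined` do not exist in ƿit):

```javascript
// ƿit truthiness as a predicate (sketch). Exactly four values are
// falsy; everything else, including [] and {}, is truthy.
function truthy(v) {
    return v !== false && v !== 0 && v !== "" && v !== null;
}
```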
## Immutability with Stone
The `stone()` function makes values permanently immutable.
```javascript
var o = {x: 1}
is_stone(o) // false
stone(o)
is_stone(o) // true
o.x = 2 // disrupts!
```
Stone is **deep** — all nested objects and arrays are also frozen. This cannot be reversed.
## Function Proxy
A function with two parameters (`name`, `args`) acts as a proxy when properties are accessed on it. Any method call on the function dispatches through the proxy.
```javascript
var proxy = function(name, args) {
return `${name}:${length(args)}`
}
proxy.hello() // "hello:0"
proxy.add(1, 2) // "add:2"
proxy["method"]() // "method:0"
var m = "dynamic"
proxy[m]() // "dynamic:0"
```
For non-proxy functions, property access disrupts:
```javascript
var fn = function() { return 1 }
fn.foo // disrupts
fn.foo = 1 // disrupts
```
## Regex
Regex literals are written with forward slashes, with optional flags.
```javascript
var r = /\d+/
var result = extract("abc123", r)
result[0] // "123"
var ri = /hello/i
var result2 = extract("Hello", ri)
result2[0] // "Hello"
```
## Error Handling
ƿit uses `disrupt` and `disruption` for error handling. A `disrupt` signals that something went wrong. The `disruption` block attached to a function catches it.
```javascript
var safe_divide = function(a, b) {
if (b == 0) disrupt
return a / b
} disruption {
log.error("something went wrong")
}
```
`disrupt` is a bare keyword — it does not carry a value. The `disruption` block knows that something went wrong, but not what.
### Re-raising
A `disruption` block can re-raise by calling `disrupt` again:
```javascript
var outer = function() {
var inner = function() { disrupt } disruption { disrupt }
inner()
} disruption {
// caught here after re-raise
}
outer()
```
### Testing for Disruption
```javascript
var should_disrupt = function(fn) {
var caught = false
var wrapper = function() {
fn()
} disruption {
caught = true
}
wrapper()
return caught
}
```
If an actor has an unhandled disruption, it crashes.
## Self-Referencing Structures
Objects can reference themselves:
```javascript
var o = {name: "root"}
o.self = o
o.self.self.name // "root"
```
## Variable Shadowing
Inner functions can shadow outer variables:
```javascript
var x = 10
var fn = function() {
var x = 20
return x
}
fn() // 20
x // 10
```

docs/library/_index.md
---
title: "Standard Library"
description: "ƿit standard library modules"
weight: 90
type: "docs"
---
The standard library provides modules loaded with `use()`.
| Module | Description |
|--------|-------------|
| [blob](/docs/library/blob/) | Binary data (bits, not bytes) |
| [time](/docs/library/time/) | Time constants and conversions |
| [math](/docs/library/math/) | Trigonometry, logarithms, roots |
| [json](/docs/library/json/) | JSON encoding and decoding |
| [random](/docs/library/random/) | Random number generation |
The `text`, `number`, `array`, and `object` functions are intrinsics — they are always available without `use`. See [Built-in Functions](/docs/functions/) for the full list, and the individual reference pages for [text](/docs/library/text/), [number](/docs/library/number/), [array](/docs/library/array/), and [object](/docs/library/object/).

---
title: "array"
description: "Array creation and manipulation"
weight: 30
type: "docs"
---
The `array` function is an intrinsic (always available, no `use()` needed). It is **polymorphic** — its behavior depends on the type of the first argument.
## From a Number
Create an array of a given size.
### array(number)
All elements initialized to `null`.
```javascript
array(3) // [null, null, null]
```
### array(number, initial)
All elements initialized to a value. If initial is a function, it is called for each element (passed the index if arity >= 1).
```javascript
array(3, 0) // [0, 0, 0]
array(3, i => i * 2) // [0, 2, 4]
```
## From an Array
Copy, map, concat, or slice.
### array(array)
Copy an array (mutable).
```javascript
var copy = array(original)
```
### array(array, function)
Map — call function with each element, collect results.
```javascript
array([1, 2, 3], x => x * 2) // [2, 4, 6]
```
### array(array, from, to)
Slice — extract a sub-array. Negative indices count from end.
```javascript
array([1, 2, 3, 4, 5], 1, 4) // [2, 3, 4]
array([1, 2, 3], -2) // [2, 3]
```
### array(array, another)
Concatenate two arrays.
```javascript
array([1, 2], [3, 4]) // [1, 2, 3, 4]
```
## From a Record
### array(record)
Get the keys of a record as an array of text.
```javascript
array({a: 1, b: 2}) // ["a", "b"]
```
## From Text
### array(text)
Split text into individual characters (grapheme clusters). This is the standard way to iterate over characters in a string.
```javascript
array("hello") // ["h", "e", "l", "l", "o"]
array("ƿit") // ["ƿ", "i", "t"]
```
### array(text, separator)
Split text by a separator string.
```javascript
array("a,b,c", ",") // ["a", "b", "c"]
```
### array(text, length)
Dice text into chunks of a given length.
```javascript
array("abcdef", 2) // ["ab", "cd", "ef"]
```
### array.for(array, function)
Iterate over elements.
```javascript
array.for([1, 2, 3], function(el, i) {
    print(i, el)
})
// With early exit
array.for([1, 2, 3, 4], function(el) {
    if (el > 2) return true
    print(el)
}, false, true) // prints 1, 2
```

---
title: "blob"
description: "Binary data containers (bits, not bytes)"
weight: 50
type: "docs"
---
Blobs are binary large objects — containers of bits (not bytes). They're used for encoding data, messages, images, network payloads, and more.

---
title: "json"
description: "JSON encoding and decoding"
weight: 80
type: "docs"
---
JSON encoding and decoding.
```javascript
var json = use('json')
```
### json.encode(value, space, replacer, whitelist)
Convert a value to JSON text. With no `space` argument, output is pretty-printed with 1-space indent. Pass `false` or `0` for compact single-line output.
```javascript
json.encode({a: 1, b: 2})
// '{ "a": 1, "b": 2 }'
// Compact (no whitespace)
json.encode({a: 1, b: 2}, false)
// '{"a":1,"b":2}'
// Pretty print with 2-space indent
json.encode({a: 1, b: 2}, 2)
```
**Parameters:**
- **value** — the value to encode
- **space** — indentation: number of spaces, string, or `false`/`0` for compact output. Default is pretty-printed.
- **replacer** — function to transform values
- **whitelist** — array of keys to include
```javascript
var config_text = json.encode(config, 2)
// Load configuration
var loaded = json.decode(config_text)
print(loaded.debug) // true
```

---
title: "math"
description: "Trigonometry, logarithms, and roots"
weight: 70
type: "docs"
---
ƿit provides three math modules with identical functions but different angle representations:
```javascript
var math = use('math/radians') // angles in radians
var math = use('math/degrees') // angles in degrees
var math = use('math/cycles') // angles in cycles (0-1)
```
### arc_sine(n)
Inverse sine.
```javascript
math.arc_sine(1) // pi/2 (radians)
```
### arc_cosine(n)
Inverse cosine.
```javascript
math.arc_cosine(0) // pi/2 (radians)
```
### arc_tangent(n, denominator)
Inverse tangent. With two arguments, computes atan2.
```javascript
math.arc_tangent(1) // pi/4 (radians)
math.arc_tangent(1, 1) // pi/4 (radians)
math.arc_tangent(-1, -1) // -3pi/4 (radians)
```
## Exponentials and Logarithms
### e(power)
Euler's number raised to a power. Default power is 1.
```javascript
math.e() // 2.718281828...
math.e(2) // e^2
```
### ln(n)
## Examples
```javascript
var math = use('math/radians')
// Distance between two points
var distance = function(x1, y1, x2, y2) {
    var dx = x2 - x1
    var dy = y2 - y1
    return math.sqrt(dx * dx + dy * dy)
}
// Angle between two points
var angle = function(x1, y1, x2, y2) {
    return math.arc_tangent(y2 - y1, x2 - x1)
}
// Rotate a point
var rotate = function(x, y, a) {
    var c = math.cosine(a)
    var s = math.sine(a)
    return {
        x: x * c - y * s,
        y: x * s + y * c
    }
}
```

---
title: "number"
description: "Numeric conversion and operations"
weight: 20
type: "docs"
---
The `number` function is an intrinsic (always available, no `use()` needed). It is **polymorphic** — its behavior depends on the type of the first argument.
## Conversion
### number(text, format)
Parse formatted numbers.
| Format | Description |
|--------|-------------|
| `""` | Standard decimal |
| `"u"` | Underbar separator (1_000) |
| `"d"` | Comma separator (1,000) |
| `"s"` | Space separator (1 000) |
| `"v"` | European (1.000,50) |
| `"b"` | Binary |
| `"o"` | Octal |
| `"h"` | Hexadecimal |
| `"j"` | JavaScript style (0x, 0o, 0b prefixes) |
```javascript
number("1,000", "d") // 1000
```
### fraction(n)
Get the fractional part.
```javascript
fraction(4.75) // 0.75
```
### min(a, b)
Return the smaller of two numbers.
```javascript
min(3, 5) // 3
```
### max(a, b)
Return the larger of two numbers.
```javascript
max(3, 5) // 5
```
### remainder(dividend, divisor)

---
title: "object"
description: "Object creation and manipulation"
weight: 40
type: "docs"
---
The `object` function is an intrinsic (always available, no `use()` needed). It is **polymorphic** — its behavior depends on the types of its arguments.
## From a Record
### object(obj)
### object(obj, keys)
Select specific keys.
```javascript
object({a: 1, b: 2, c: 3}, ["a", "c"]) // {a: 1, c: 3}
```
## From an Array of Keys
### object(keys)
Create object from keys (values are `true`).
### object(keys, function)
```javascript
object(["a", "b", "c"], (k, i) => i) // {a: 0, b: 1, c: 2}
```
Create a new object with the given prototype.
```javascript
var animal = {speak: function() { print("...") }}
var dog = meme(animal)
dog.speak = function() { print("woof") }
```
### proto(obj)
```javascript
var obj = {a: 1, b: 2, c: 3}
// Get all keys
var keys = array(obj) // ["a", "b", "c"]
```

---
title: "random"
description: "Random number generation"
weight: 90
type: "docs"
---
Random number generation.
```javascript
var random = use('random')
var coin_flip = random.random() < 0.5
// Random element from array
var pick = function(arr) {
    return arr[random.random_whole(length(arr))]
}
var colors = ["red", "green", "blue"]
var color = pick(colors)
// Shuffle array
var shuffle = function(arr) {
    var result = array(arr) // copy
    var i = length(result) - 1
    var j = 0
    var temp = null
    for (i = length(result) - 1; i > 0; i--) {
        j = random.random_whole(i + 1)
        temp = result[i]
        result[i] = result[j]
        result[j] = temp
    }
    return result
}
// Random in range
var random_range = function(lo, hi) {
    return lo + random.random() * (hi - lo)
}
var x = random_range(-10, 10) // -10 to 10
```

---
title: "text"
description: "String conversion and manipulation"
weight: 10
type: "docs"
---
The `text` function is an intrinsic (always available, no `use()` needed). It is **polymorphic** — its behavior depends on the type of the first argument.
To split text into characters, use `array(text)` — see [array](/docs/library/array/).
## From an Array
### text(array, separator)
Join array elements into text with a separator (default: empty string).
```javascript
text(["h", "e", "l", "l", "o"]) // "hello"
text([1, 2, 3], ", ") // "1, 2, 3"
text(["a", "b"], "-") // "a-b"
```
## From a Number
### text(number, radix)
Convert a number to text. Radix is 2-36 (default: 10).
```javascript
text(255, 16) // "ff"
text(255, 2) // "11111111"
```
## From Text
### text(text, from, to)
Extract a substring from index `from` to `to`. Negative indices count from end.
```javascript
text("hello world", 0, 5) // "hello"
text("hello world", 6) // "world"
text("hello", -3) // "llo"
```
## Methods
### text.normalize(text)
Unicode normalize the text (NFC form).
```javascript
text.normalize("cafe\u0301") // normalized form
```
### text.codepoint(text)
Get the Unicode codepoint of the first character.
```javascript
text.codepoint("A") // 65
```
### text.extract(text, pattern, from, to)

---
title: "time"
description: "Time constants and conversion functions"
weight: 60
type: "docs"
---
The time module provides time constants and conversion functions.
```javascript
var last_week = now - time.week
var later = now + (2 * time.hour)
// Format future time
print(time.text(tomorrow))
```
## Example
```javascript
var time = use('time')
var start = time.number()
// ... do work ...
var elapsed = time.number() - start
print(`Took ${elapsed} seconds`)
// Schedule for tomorrow
var tomorrow = time.number() + time.day
print(`Tomorrow: ${time.text(tomorrow, "yyyy-MM-dd")}`)
```

docs/logging.md
---
title: "Logging"
description: "Configurable channel-based logging with sinks"
weight: 25
type: "docs"
---
Logging in ƿit is channel-based. Any `log.X(value)` call writes to channel `"X"`. Channels are routed to **sinks** — named destinations that format and deliver log output to the console or to files.
## Channels
Three channels are conventional:
| Channel | Usage |
|---------|-------|
| `log.console(msg)` | General output |
| `log.error(msg)` | Errors and warnings |
| `log.system(msg)` | Internal system messages |
Any name works. `log.debug(msg)` creates channel `"debug"`, `log.perf(msg)` creates `"perf"`, and so on.
```javascript
log.console("server started on port 8080")
log.error("connection refused")
log.debug({query: "SELECT *", rows: 42})
```
Non-text values are JSON-encoded automatically.
## Default Behavior
With no configuration, a default sink routes `console`, `error`, and `system` to the terminal in pretty format. The `error` channel includes a stack trace by default:
```
[a3f12] [console] server started on port 8080
[a3f12] [error] connection refused
at handle_request (server.ce:42:3)
at main (main.ce:5:1)
```
The format is `[actor_id] [channel] message`. Error stack traces are always on unless you explicitly configure a sink without them.
## Configuration
Logging is configured in `.cell/log.toml`. Each `[sink.NAME]` section defines a sink.
```toml
[sink.terminal]
type = "console"
format = "bare"
channels = ["console"]
[sink.errors]
type = "file"
path = ".cell/logs/errors.jsonl"
channels = ["error"]
[sink.everything]
type = "file"
path = ".cell/logs/all.jsonl"
channels = ["*"]
exclude = ["console"]
```
### Sink fields
| Field | Values | Description |
|-------|--------|-------------|
| `type` | `"console"`, `"file"` | Where output goes |
| `format` | `"pretty"`, `"bare"`, `"json"` | How output is formatted |
| `channels` | array of names, or `["*"]` | Which channels this sink receives. Quote `'*'` on the CLI to prevent shell glob expansion. |
| `exclude` | array of names | Channels to skip (useful with `"*"`) |
| `stack` | array of channel names | Channels that capture a stack trace |
| `path` | file path | Output file (file sinks only) |
### Formats
**pretty** — human-readable, one line per message. Includes actor ID, channel, source location, and message.
```
[a3f12] [console] main.ce:5 server started
```
**bare** — minimal. Actor ID and message only.
```
[a3f12] server started
```
**json** — structured JSONL (one JSON object per line). Used for file sinks and machine consumption.
```json
{"actor_id":"a3f12...","timestamp":1702656000.5,"channel":"console","event":"server started","source":{"file":"main.ce","line":5,"col":3,"fn":"init"}}
```
## Log Records
Every log call produces a record:
```javascript
{
actor_id: "a3f12...", // full actor GUID
timestamp: 1702656000.5, // seconds since epoch
channel: "console", // channel name
event: "the message", // value passed to log
source: {
file: "main.ce",
line: 5,
col: 3,
fn: "init"
}
}
```
File sinks write one JSON-encoded record per line. Console sinks format the record according to their format setting.
## Stack Traces
The `error` channel captures stack traces by default. To enable stack traces for other channels, add a `stack` field to a sink — an array of channel names that should include a call stack.
Via the CLI:
```bash
pit log add terminal console --channels=console,error,debug --stack=error,debug
```
Or in `log.toml`:
```toml
[sink.terminal]
type = "console"
format = "bare"
channels = ["console", "error", "debug"]
stack = ["error", "debug"]
```
Only channels listed in `stack` get stack traces. Other channels on the same sink print without one:
```
[a3f12] server started
[a3f12] connection failed
at handle_request (server.ce:42:3)
at process (router.ce:18:5)
at main (main.ce:5:1)
```
With JSON format, a `stack` array is added to the record for channels that have stack capture enabled:
```json
{"actor_id":"a3f12...","channel":"error","event":"connection failed","source":{"file":"server.ce","line":42,"col":3,"fn":"handle_request"},"stack":[{"fn":"handle_request","file":"server.ce","line":42,"col":3},{"fn":"process","file":"router.ce","line":18,"col":5},{"fn":"main","file":"main.ce","line":5,"col":1}]}
```
Channels without `stack` configuration produce no stack field. Capturing stacks adds overhead — enable it for debugging, not production.
## CLI
The `pit log` command manages sinks and reads log files. See [CLI — pit log](/docs/cli/#pit-log) for the full reference.
```bash
pit log list # show sinks
pit log add terminal console --format=bare --channels=console
pit log add dump file .cell/logs/dump.jsonl '--channels=*' --exclude=console
pit log add debug console --channels=error,debug --stack=error,debug
pit log remove terminal
pit log read dump --lines=20 --channel=error
pit log tail dump
```
## Examples
### Development setup
Route console output to the terminal with minimal formatting. Send everything else to a structured log file for debugging.
```toml
[sink.terminal]
type = "console"
format = "bare"
channels = ["console"]
[sink.debug]
type = "file"
path = ".cell/logs/debug.jsonl"
channels = ["*"]
exclude = ["console"]
```
```javascript
log.console("listening on :8080") // -> terminal: [a3f12] listening on :8080
log.error("bad request") // -> debug.jsonl only
log.debug({latency: 0.042}) // -> debug.jsonl only
```
### Separate error log
Keep a dedicated error log alongside a full dump.
```toml
[sink.terminal]
type = "console"
format = "pretty"
channels = ["console", "error", "system"]
[sink.errors]
type = "file"
path = ".cell/logs/errors.jsonl"
channels = ["error"]
[sink.all]
type = "file"
path = ".cell/logs/all.jsonl"
channels = ["*"]
```
### JSON console
Output structured JSON to the console for piping into other tools.
```toml
[sink.json_out]
type = "console"
format = "json"
channels = ["console", "error"]
```
```bash
pit run myapp.ce | jq '.event'
```
### Reading logs
```bash
# Last 50 error entries
pit log read errors --lines=50
# Errors since a timestamp
pit log read errors --since=1702656000
# Filter a wildcard sink to one channel
pit log read all --channel=debug --lines=10
# Follow a log file in real time
pit log tail all
```

# Cell actor scripting language
Cell is an implementation of Misty ([mistysystem.com](https://mistysystem.com)).
## Memory
Values are 32 bit for 32 bit builds and 64 bit for 64 bit builds.
### 32 bit value
- `LSB = 0` — payload is a 31 bit signed int
- `LSB = 01` — payload is a 30 bit pointer
- `LSB = 11` — the next 3 bits are a special tag, leaving 27 bits of payload
### 64 bit value
- `LSB = 0` — payload is a 32 bit signed int, stored in the high 32 bits
- `LSB = 01` — payload is a 61 bit pointer
- `LSB = 101` — short float: a 61 bit double with 3 fewer exponent bits
- `LSB = 11` — special tag: the next 3 bits select one of 8 special tags (5 tag bits total), leaving 59 bits of payload
Special tags:
- 1: Bool — payload is 0 or 1.
- 2: null — payload is 0.
- 3: exception.
- 4: string — an immediate string. The next 3 low bits give the length in bytes; the rest is string data, encoded in UTF-8. This allows strings of up to 7 ASCII characters.
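As an illustration, the integer case of the 64 bit layout can be sketched in C. This is a reader's sketch of the scheme described above, not the actual implementation, and the names are invented:

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of the 64 bit value layout: LSB = 0 means the value is a
   32 bit signed int carried in the high 32 bits of the word. */
typedef uint64_t value_t;

static value_t int_value(int32_t n) {
    /* The payload sits in the high half; the low bit stays 0,
       which is the integer tag. */
    return (value_t)(uint32_t)n << 32;
}

static int is_int(value_t v) {
    return (v & 1) == 0;
}

static int32_t int_payload(value_t v) {
    return (int32_t)(v >> 32);
}
```

Pointers (`LSB = 01`), short floats (`LSB = 101`), and special tags (`LSB = 11`) would each get analogous accessors keyed off the low bits.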
## Numbers and math
Cell can be compiled with different levels of numeric exactness. Any number which cannot be represented exactly becomes `null`. Any numeric operation which includes `null` results in `null`.
Using short floats in a 64 bit system means you have doubles in the range of +- 10^38, not the full range of double. If you create a number out of that range, it's null.
You can also compile a 64 bit system with full precision doubles, but this will use more memory and may be slower.
You can also compile a 64 bit system with 32 bit floats, stored the same way a 32 bit int is. Again, anything out of the 32 bit float range becomes null.
You can compile without floating point support at all; 32 bit ints are then used for fixed point calculations.
Or, you can compile using Dec64, which is a 64 bit decimal floating point format, for exact precision.
## Objects
Objects are heap allocated, referenced by a pointer value. They are all preceded by an object header, the length of a word on the system.
### 64 bit build
- 56 bits: capacity
- 1 bit: memory reclamation flag — this object has already been moved
- 2 bits: reserved (per object)
- 1 bit: stone — this object is immutable
- 3 bits: type — the type of the object
- 1 bit: fwd — this object is a forward linkage

When the fwd bit (the last bit, `..1`) is set, the object (an array, blob, pretext, or record) has grown beyond its capacity and now resides at a new address. The remaining 63 bits contain the address of the enlarged object. Forward linkages are cleaned up by the memory reclaimer.
Type 7: Opaque C object
Header
Pointer
Capacity is an ID of a registered C type.
Pointer is a pointer to the opaque C object.
Type 0: Array
Header
Length
Element[]
Capacity is number of elements the array can hold. Length is number of elements in use. Number of words used by an array is capacity + 2.
Type 1: blob
Header
Length
Bit[]
Capacity is number of bits the blob can hold. Length is number of bits in use. Bits follow, from [0] to [capacity - 1], with [0] bit in the most significant position of word 2, and [63] in the least significant position of word 2. The last word is zero filled, if necessary.
Number of words used is (capacity + 63) // 64 + 2
Type 2: Text
Text has two forms, depending on whether it is stone, which changes the meaning of its length word.
Header
Length(pretext) or Hash(text)
Character[0] and character[1]
Capacity of a pretext is the number of characters it can hold. During stoning and reclamation, capacity is set to the length.
The capacity of a text is its length.
The length of a pretext is the number of characters it contains; it is not greater than the capacity.
Hash of a text is used for organizing records. If the hash is zero, it's not been computed yet. All texts in the immutable memory have hashes.
A text object contains UTF32 characters, packed two per word. If the number of characters is odd, the least significant half of the last word is zero filled.
The number of words used by a text is (capacity + 1) // 2 + 2
Type 3: Record
A record is an array of fields represented as key/value pairs. Fields are located by hashes of texts, using open addressing with linear probing and lazy deletion. The load factor is less than 0.5.
Header
Prototype
Length
Key[0]
Value[0]
Key[1]
Value[1]
...
The capacity is the number of fields the record can hold. It is a power of two minus one. It is at least twice the length.
The length is the number of fields that the record currently contains.
A field candidate number is identified by and(key.hash, capacity). In case of hash collision, advance to the next field. If this goes past the end, continue with field 1. Field 0 is reserved.
The "exception" special tag is used to mark deleted entries in the object map.
The number of words used by a record is (capacity + 1) * 2.
Prototypes are searched for properties when one cannot be found on the record itself. Prototypes can have prototypes.
#### key[0] and value[0]
These are reserved for internal use, and skipped over during key probing.
The first 32 bits of key are used as a 32 bit integer key, if this object has ever been used as a key itself.
The last 32 bits are used as an opaque C class key. C types can be registered with the system, and each are assigned a monotonically increasing number. In the case that this object has a C type, then the bottom 32 bits of key[0] are not 0. If that is the case, then a pointer to its C object is stored in value[0].
#### Valid keys & Hashing
Keys are stored directly in object maps. There are three possibilities for a valid key: an object text, an object record, or an immediate text.
In the case of an immediate text, the hash is computed on the fly using the fash64_hash_one function, before being used to look up the key in the object map. Direct value comparison is used to confirm the key.
For object texts (texts longer than 7 ASCII characters), the hash is stored in the text object itself. When an object text is used as a key, a stone version is created and interned. Any program static texts reference this stoned, interned text. When looking up a heap text as a key, the interned table is checked first. If the text is not there, the key is not in the object (since all keys are interned). If it is, the interned version is used to check against the object map: its hash locates the key, and direct pointer comparison confirms it.
For record keys, these are unique; once a record is used as a key, it gets assigned a monotonically increasing 32 bit integer, stored in key[0]. When checking it in an object map, the integer is used directly as the key. If key[0] is 0, the record has not been used as a key yet. If it's not 0, fash64_hash_one is used to compute a hash of its ID, and then direct value pointer comparison is used to confirm.
### Text interning
Texts that cannot fit in an immediate, and which are used as an object key, create a stoned and interned version (the pointer which is used as the key). Any text literals are also stoned and interned.
The interning table is an open-addressed hash with a load factor of 0.8, using Robin Hood probing. Probing is done using the text hash; confirmation is done using the length, then a memcmp of the text.
When the GC runs, a new interned text table is created. Each text literal, and each text used as a key, is added to the new table as the live objects are copied. This keeps the interning table from becoming a graveyard. Interned values are never deleted until a GC.
Type 4: Function
Header
Code
Outer
A function object has zero capacity and is always stone.
Code is a pointer to the code object that the function executes.
Outer is a pointer to the frame that created this function object.
Size is 3 words.
Type 5: Frame
Header
Function
Caller
Return address
The activation frame is created when a function is invoked to hold its linkages and state.
The capacity is the number of slots, including the inputs, variables, temporaries, and the four words of overhead. A frame, unlike the other types, is never stone.
The function is the address of the function object being called.
The caller is the address of the frame that is invoking the function.
The return address is the address of the instruction in the code that should be executed upon return.
Next come the input arguments, if any.
Then the variables closed over by the inner functions.
Then the variables that are not closed over, followed by the temporaries.
When a function returns, the caller is set to zero. This is a signal to the memory reclaimer that the frame can be reduced.
Type 6: Code
Header
Arity
Size
Closure size
Entry point
Disruption point
A code object exists in the actor's immutable memory. A code object never exists in mutable memory.
A code object has a zero capacity and is always stone.
The arity is the maximum number of inputs.
The size is the capacity of an activation frame that will execute this code.
The closure size is a reduced capacity for returned frames that survive memory reclamation.
The entry point is the address at which to begin execution.
The disruption point is the address of the disruption clause.
### opaque C objects
Records can have opaque C data attached to them.
A C class can register a GC clean up, and a GC trace function. The trace function is called when the record is encountered in the live object graph; and it should mark any values it wants to keep alive in that function.
The system maintains an array of live opaque C objects. When such an object is encountered, it marks it as live in the array. When the GC completes, it iterates this array and calls the GC clean up function for each C object in the array with alive=0. Alive is then cleared for the next GC cycle.
### 32 bit build
- 3 bit type
- 1 bit stone
- 1 bit memory reclamation flag
- 27 bit capacity
Key differences here:
- blob max capacity is 2**27 bits = 2**24 bytes = 16 MB [this likely needs to be addressed]
- fwd is type ...0, and the pointer is 31 bits
- other types are:
  - 111 array
  - 101 object
  - 011 blob
  - 001
## Memory Allocation
Cell uses a single block of memory that it doles out as needed to the actors in its system.
Actors are given a block of memory in standard sizes using a doubling buddy memory manager. An actor is given an immutable data section on birth, as well as a mutable data section. When its mutable data becomes full, it requests a new one. Actors utilize their mutable memory with a simple bump allocation. If there is not sufficient memory available, the actor suspends and its status changes to exhausted.
The smallest block size is determined per platform, but it can be as small as 4KB on 64 bit systems.
The actor is then given a new block of memory of the same size, and it runs a garbage collector to reclaim memory, using Cheney's copying algorithm. If a disappointing amount of memory was reclaimed, this is noted, and the actor is given a larger block of memory on the next request.
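The bump allocation an actor performs inside its mutable section can be sketched as follows (illustrative only; the structure and names are assumptions, not Cell source):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* A bump allocator over one fixed block: allocation is a pointer
   increment, and when the block is exhausted the caller must request
   a new block (which in Cell triggers the copying collector). */
typedef struct {
    uint8_t *base;   /* start of the block */
    size_t   size;   /* block size in bytes */
    size_t   used;   /* bytes handed out so far */
} bump_t;

static void *bump_alloc(bump_t *b, size_t n) {
    n = (n + 7) & ~(size_t)7;          /* keep word alignment */
    if (b->used + n > b->size) {
        return NULL;                   /* exhausted: grow or collect */
    }
    void *p = b->base + b->used;
    b->used += n;
    return p;
}
```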

docs/nota.md Normal file
---
title: "Nota Format"
description: "Network Object Transfer Arrangement"
weight: 85
type: "docs"
---
Nota is a binary message format developed for use in the Procession Protocol. It provides a compact, JSON-like encoding that supports blobs, text, arrays, records, numbers, and symbols. Nota is an internal module: `use('internal/nota')`.
Nota stands for Network Object Transfer Arrangement.
## Design Philosophy
JSON had three design rules: minimal, textual, and subset of JavaScript. The textual and JavaScript rules are no longer necessary. Nota maintains JSON's philosophy of being at the intersection of most programming languages and most data types, but departs by using counts instead of brackets and binary encoding instead of text.
Nota uses Kim continuation bytes for counts and character encoding. See [Kim Encoding](#kim) for details.
## Type Summary
| Bits | Type |
|------|------|
| `000` | Blob |
| `001` | Text |
| `010` | Array |
| `011` | Record |
| `100` | Floating Point (positive exponent) |
| `101` | Floating Point (negative exponent) |
| `110` | Integer (zero exponent) |
| `111` | Symbol |
## Preambles
Every Nota value starts with a preamble byte that is a Kim value with the three most significant bits used for type information.
Most types provide 3 or 4 data bits in the preamble. If the Kim encoding of the data fits in those bits, it is incorporated directly and the continue bit is off. Otherwise the continue bit is on and the continuation follows.
## Blob
```
C 0 0 0 D D D D
```
- **C** — continue the number of bits
- **DDDD** — the number of bits
A blob is a string of bits. The data produces the number of bits. The number of bytes that follow: `floor((number_of_bits + 7) / 8)`. The final byte is padded with 0 if necessary.
Example: A blob containing 25 bits `1111000011100011001000001`:
```
80 19 F0 E3 20 80
```
## Text
```
C 0 0 1 D D D D
```
- **C** — continue the number of characters
- **DDDD** — the number of characters
The data produces the number of characters. Kim-encoded characters follow. ASCII characters are 1 byte, first quarter BMP characters are 2 bytes, all other Unicode characters are 3 bytes. Unlike JSON, there is never a need for escapement.
Examples:
```
"" → 10
"cat" → 13 63 61 74
```
## Array
```
C 0 1 0 D D D D
```
- **C** — continue the number of elements
- **DDDD** — the number of elements
An array is an ordered sequence of values. Following the preamble are the elements, each beginning with its own preamble. Nesting is encouraged.
## Record
```
C 0 1 1 D D D D
```
- **C** — continue the number of pairs
- **DDDD** — the number of pairs
A record is an unordered collection of key/value pairs. Keys must be text and must be unique within the record. Values can be any Nota type.
## Floating Point
```
C 1 0 E S D D D
```
- **C** — continue the exponent
- **E** — sign of the exponent
- **S** — sign of the coefficient
- **DDD** — three bits of the exponent
Nota floating point represents numbers as `coefficient * 10^exponent`. The coefficient must be an integer. The preamble may contain the first three bits of the exponent, followed by the continuation of the exponent (if any), followed by the coefficient.
Use the integer type when the exponent is zero.
Examples:
```
-1.01 → 5A 65
98.6 → 51 87 5A
-0.5772156649 → D8 0A 95 C0 B0 BD 69
-10000000000000 → C8 0D 01
```
## Integer
```
C 1 1 0 S D D D
```
- **C** — continue the integer
- **S** — sign
- **DDD** — three bits of the integer
Integers in the range -7 to 7 fit in a single byte. Integers in the range -1023 to 1023 fit in two bytes. Integers in the range -131071 to 131071 fit in three bytes.
Examples:
```
0 → 60
2023 → E0 8F 67
-1 → 69
```
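The number encodings can be checked with a short Python sketch. Assuming the bit layouts above (the preamble's 3 data bits are the high bits of the magnitude, and continuation bytes carry 7 bits each), this reproduces the worked integer and floating point examples:

```python
def kim(n):
    """Plain Kim: 7 bits per byte, high bit set on all but the last."""
    out = [n & 0x7F]
    n >>= 7
    while n:
        out.append(0x80 | (n & 0x7F))
        n >>= 7
    return bytes(reversed(out))

def with_continuation(first, mag):
    """Fill a preamble's 3 data bits with the high bits of mag and
    emit Kim continuation bytes for the rest."""
    k = 0
    while mag >= 1 << (3 + 7 * k):
        k += 1
    out = [(0x80 if k else 0) | first | (mag >> (7 * k))]
    for i in range(k - 1, -1, -1):
        out.append(((mag >> (7 * i)) & 0x7F) | (0x80 if i else 0))
    return bytes(out)

def encode_int(n):
    """Integer: C110 SDDD."""
    return with_continuation(0b110 << 4 | (n < 0) << 3, abs(n))

def encode_float(coefficient, exponent):
    """Floating point: C10E SDDD carries the exponent; the coefficient
    magnitude follows as a plain Kim number."""
    first = 0b10 << 5 | (exponent < 0) << 4 | (coefficient < 0) << 3
    return with_continuation(first, abs(exponent)) + kim(abs(coefficient))
```

For instance, `-1.01` is `-101 * 10**-2`, and `encode_float(-101, -2)` yields `5A 65` as in the floating point examples.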
## Symbol
```
0 1 1 1 D D D D
```
- **DDDD** — the symbol
There are currently five symbols:
```
null → 70
false → 72
true → 73
private → 78
system → 79
```
The private prefix must be followed by a record containing a private process address. The system prefix must be followed by a record containing a system message. All other symbols are reserved.

---
title: "Packages"
description: "Code organization and sharing in ƿit"
weight: 30
type: "docs"
---
Packages are the fundamental unit of code organization and sharing in ƿit.
## Package Structure
A package is a directory containing a `cell.toml` manifest:
```
mypackage/
├── cell.toml # package manifest
├── main.ce # entry point (optional)
├── utils.cm # module
├── helper/
│ └── math.cm # nested module
├── render.c # C extension
└── internal/
└── helpers.cm # private module (internal/ only)
```
## cell.toml
## Module Resolution
When importing with `use()`, ƿit searches in order:
1. **Local package** — relative to package root
2. **Dependencies** — via aliases in `cell.toml`
3. **Core** — built-in ƿit modules
```javascript
// In package 'myapp' with dependency: renderer = "gitea.pockle.world/john/renderer"
use('json') // core module
```
### Private Modules
Files in the `internal/` directory are private to their package:
```javascript
// internal/helpers.cm is only accessible within the same package
use('internal/helpers') // OK from same package
use('myapp/internal/helpers') // Error from other packages
```
## Package Locators
Local packages are symlinked into the shop, making development seamless.
## The Shop
ƿit stores all packages in the **shop** at `~/.cell/`:
```
~/.cell/
│ └── john/
│ └── work/
│ └── mylib -> /Users/john/work/mylib
├── build/
│ └── <content-addressed cache (bytecode, dylibs, manifests)>
├── cache/
│ └── <downloaded zips>
├── lock.toml
```
```bash
# Install from remote
pit install gitea.pockle.world/john/prosperon
# Install from local path
pit install /Users/john/work/mylib
```
## Updating Packages
```bash
# Update all
pit update
# Update specific package
pit update gitea.pockle.world/john/prosperon
```
## Development Workflow
For active development, link packages locally:
```bash
# Link a package for development
pit link add gitea.pockle.world/john/prosperon /Users/john/work/prosperon
# Changes to /Users/john/work/prosperon are immediately visible
# Remove link when done
pit link delete gitea.pockle.world/john/prosperon
```
## C Extensions
C files in a package are compiled into per-file dynamic libraries stored in the content-addressed build cache:
```
mypackage/
├── cell.toml
├── render.c # compiled to ~/.cell/build/<hash>
└── physics.c # compiled to ~/.cell/build/<hash>
```
Each `.c` file gets its own `.dylib` at a content-addressed path in `~/.cell/build/`. A per-package manifest maps module names to their dylib paths so the runtime can find them — see [Dylib Manifests](/docs/shop/#dylib-manifests). A `.c` file and `.cm` file with the same stem at the same scope is a build error — use distinct names.
See [Writing C Modules](/docs/c-modules/) for details.
## Platform-Specific Files
```
mypackage/
└── audio_emscripten.c # Web-specific
```
ƿit selects the appropriate file based on the build target.

docs/requestors.md Normal file
---
title: "Requestors"
description: "Asynchronous work with requestors"
weight: 25
type: "docs"
---
Requestors are functions that encapsulate asynchronous work. They provide a structured way to compose callbacks, manage cancellation, and coordinate concurrent operations between actors.
## What is a Requestor
A requestor is a function with this signature:
```javascript
var my_requestor = function(callback, value) {
// Do async work, then call callback with result
// Return a cancel function
}
```
- **callback** — called when the work completes: `callback(value, reason)`
- On success: `callback(result)` or `callback(result, null)`
- On failure: `callback(null, reason)` where reason explains the failure
- **value** — input passed from the previous step (or the initial caller)
- **return** — a cancel function, or null if cancellation is not supported
The cancel function, when called, should abort the in-progress work.
## Writing a Requestor
```javascript
var fetch_data = function(callback, url) {
$contact(function(connection) {
send(connection, {get: url}, function(response) {
callback(response)
})
}, {host: url, port: 80})
return function() {
// clean up if needed
}
}
```
A requestor that always succeeds immediately:
```javascript
var constant = function(callback, value) {
callback(42)
}
```
A requestor that always fails:
```javascript
var broken = function(callback, value) {
callback(null, "something went wrong")
}
```
## Composing Requestors
ƿit provides four built-in functions for composing requestors into pipelines.
### sequence(requestor_array)
Run requestors one after another. Each result becomes the input to the next. The final result is passed to the callback.
```javascript
var pipeline = sequence([
fetch_user,
validate_permissions,
load_profile
])
pipeline(function(profile, reason) {
if (reason) {
print(reason)
} else {
print(profile.name)
}
}, user_id)
```
If any step fails, the remaining steps are skipped and the failure propagates.
### parallel(requestor_array, throttle, need)
Start all requestors concurrently. Results are collected into an array matching the input order.
```javascript
var both = parallel([
fetch_profile,
fetch_settings
])
both(function(results, reason) {
var profile = results[0]
var settings = results[1]
}, user_id)
```
- **throttle** — limit how many requestors run at once (null for no limit)
- **need** — minimum number of successes required (default: all)
### race(requestor_array, throttle, need)
Like `parallel`, but returns as soon as the needed number of results arrive. Unfinished requestors are cancelled.
```javascript
var fastest = race([
fetch_from_cache,
fetch_from_network,
fetch_from_backup
])
fastest(function(results) {
// results[0] is whichever responded first
}, request)
```
Default need is 1. Useful for redundant operations where only one result matters.
### fallback(requestor_array)
Try each requestor in order. If one fails, try the next. Return the first success.
```javascript
var resilient = fallback([
fetch_from_primary,
fetch_from_secondary,
use_cached_value
])
resilient(function(data, reason) {
if (reason) {
print("all sources failed")
}
}, key)
```
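To make the control flow concrete, here is one way `fallback` could be implemented in terms of the requestor contract above. This is an illustrative sketch, not ƿit's built-in implementation:

```javascript
var fallback = function (requestors) {
    return function (callback, value) {
        var index = 0
        var cancel = null
        var try_next = function (reason) {
            if (index >= requestors.length) {
                // Every requestor failed: report the most recent reason.
                return callback(null, reason)
            }
            var requestor = requestors[index]
            index += 1
            cancel = requestor(function (result, fail_reason) {
                if (fail_reason === undefined || fail_reason === null) {
                    callback(result)          // first success wins
                } else {
                    try_next(fail_reason)     // failure: move to the next one
                }
            }, value)
        }
        try_next("fallback: no requestors")
        return function () {
            if (cancel) {
                cancel()                      // abort the step in flight
            }
        }
    }
}
```

Real requestors may complete asynchronously; the sketch works either way, because each attempt is only started from the previous attempt's failure callback.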
## Timeouts
Wrap any requestor with `$time_limit` to add a timeout:
```javascript
var timed = $time_limit(fetch_data, 5) // 5 second timeout
timed(function(result, reason) {
// reason will explain timeout if it fires
}, url)
```
If the requestor does not complete within the time limit, it is cancelled and the callback receives a failure.
## Requestors and Actors
Requestors are particularly useful with actor messaging. Since `send` is callback-based, it fits naturally:
```javascript
var ask_worker = function(callback, task) {
send(worker, task, function(reply) {
callback(reply)
})
}
var pipeline = sequence([
ask_worker,
process_result,
store_result
])
pipeline(function(stored) {
print("done")
$stop()
}, {type: "compute", data: [1, 2, 3]})
```

docs/semantic-index.md Normal file
---
title: "Semantic Index"
description: "Index and query symbols, references, and call sites in source files"
weight: 55
type: "docs"
---
ƿit includes a semantic indexer that extracts symbols, references, call sites, and imports from source files. The index powers the LSP (find references, rename) and is available as a CLI tool for scripting and debugging.
## Overview
The indexer walks the parsed AST without modifying it. It produces a JSON structure that maps every declaration, every reference to that declaration, and every call site in a file.
```
source → tokenize → parse → fold → index
symbols, references,
call sites, imports,
exports, reverse refs
```
Two CLI commands expose this:
| Command | Purpose |
|---------|---------|
| `pit index <file>` | Produce the full semantic index as JSON |
| `pit explain` | Query the index for a specific symbol or position |
## pit index
Index a source file and print the result as JSON.
```bash
pit index <file.ce|file.cm>
pit index <file> -o output.json
```
### Output
The index contains these sections:
| Section | Description |
|---------|-------------|
| `imports` | All `use()` calls with local name, module path, resolved filesystem path, and span |
| `symbols` | Every declaration: vars, defs, functions, params |
| `references` | Every use of a name, classified as read, write, or call |
| `call_sites` | Every function call with callee, args count, and enclosing function |
| `exports` | For `.cm` modules, the keys of the top-level `return` record |
| `reverse_refs` | Inverted index: name to list of reference spans |
### Example
Given a file `graph.ce` with functions `make_node`, `connect`, and `build_graph`:
```bash
pit index graph.ce
```
```json
{
"version": 1,
"path": "graph.ce",
"is_actor": true,
"imports": [
{"local_name": "json", "module_path": "json", "resolved_path": ".cell/packages/core/json.cm", "span": {"from_row": 2, "from_col": 0, "to_row": 2, "to_col": 22}}
],
"symbols": [
{
"symbol_id": "graph.ce:make_node:fn",
"name": "make_node",
"kind": "fn",
"params": ["name", "kind"],
"doc_comment": "// A node in the graph.",
"decl_span": {"from_row": 6, "from_col": 0, "to_row": 8, "to_col": 1},
"scope_fn_nr": 0
}
],
"references": [
{"node_id": 20, "name": "make_node", "ref_kind": "call", "span": {"from_row": 17, "from_col": 13, "to_row": 17, "to_col": 22}}
],
"call_sites": [
{"node_id": 20, "callee": "make_node", "args_count": 2, "span": {"from_row": 17, "from_col": 22, "to_row": 17, "to_col": 40}}
],
"exports": [],
"reverse_refs": {
"make_node": [
{"node_id": 20, "ref_kind": "call", "span": {"from_row": 17, "from_col": 13, "to_row": 17, "to_col": 22}}
]
}
}
```
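Because the index is plain JSON, it is easy to script against. A small sketch (assuming only the shape shown above) that groups references by symbol name:

```python
import json

def references_by_symbol(index):
    """Group references by name, preferring the reverse_refs section
    and falling back to scanning the flat references list."""
    refs = index.get("reverse_refs")
    if refs is None:
        refs = {}
        for ref in index.get("references", []):
            refs.setdefault(ref["name"], []).append(ref)
    return {
        name: [(r["ref_kind"], r["span"]["from_row"], r["span"]["from_col"])
               for r in entries]
        for name, entries in refs.items()
    }

# Typical use with the output of `pit index graph.ce`:
#   index = json.loads(open("index.json").read())
#   for name, uses in references_by_symbol(index).items():
#       print(name, uses)
```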
### Symbol Kinds
| Kind | Description |
|------|-------------|
| `fn` | Function (var or def with function value) |
| `var` | Mutable variable |
| `def` | Constant |
| `param` | Function parameter |
Each symbol has a `symbol_id` in the format `filename:name:kind` and a `decl_span` with `from_row`, `from_col`, `to_row`, `to_col` (0-based).
### Reference Kinds
| Kind | Description |
|------|-------------|
| `read` | Value is read |
| `write` | Value is assigned |
| `call` | Used as a function call target |
### Module Exports
For `.cm` files, the indexer detects the top-level `return` statement. If it returns a record literal, each key becomes an export linked to its symbol:
```javascript
// math_utils.cm
var add = function(a, b) { return a + b }
var sub = function(a, b) { return a - b }
return {add: add, sub: sub}
```
```bash
pit index math_utils.cm
```
The `exports` section will contain:
```json
[
{"name": "add", "symbol_id": "math_utils.cm:add:fn"},
{"name": "sub", "symbol_id": "math_utils.cm:sub:fn"}
]
```
## pit explain
Query the semantic index for a specific symbol or cursor position. This is the targeted query interface — instead of dumping the full index, it answers a specific question.
```bash
pit explain --span <file>:<line>:<col>
pit explain --symbol <name> <file>...
```
### --span: What is at this position?
Point at a line and column (0-based) to find out what symbol or reference is there.
```bash
pit explain --span demo.ce:6:4
```
If the position lands on a declaration, that symbol is returned along with all its references and call sites. If it lands on a reference, the indexer traces back to the declaration and returns the same information.
The result includes:
| Field | Description |
|-------|-------------|
| `symbol` | The resolved declaration (name, kind, params, doc comment, span) |
| `reference` | The reference at the cursor, if the cursor was on a reference |
| `references` | All references to this symbol across the file |
| `call_sites` | All call sites for this symbol |
| `imports` | The file's imports (for context) |
```json
{
"symbol": {
"name": "build_graph",
"symbol_id": "demo.ce:build_graph:fn",
"kind": "fn",
"params": [],
"doc_comment": "// Build a sample graph and return it."
},
"references": [
{"node_id": 71, "ref_kind": "call", "span": {"from_row": 39, "from_col": 12, "to_row": 39, "to_col": 23}}
],
"call_sites": []
}
```
### --symbol: Find a symbol by name
Look up a symbol by name. Pass one file for a focused result, or multiple files (including shell globs) to search across them all:
```bash
pit explain --symbol connect demo.ce
pit explain --symbol connect *.ce *.cm
```
```json
{
"symbols": [
{
"name": "connect",
"symbol_id": "demo.ce:connect:fn",
"kind": "fn",
"params": ["from", "to", "label"],
"doc_comment": "// Connect two nodes with a labeled edge."
}
],
"references": [
{"node_id": 29, "ref_kind": "call", "span": {"from_row": 21, "from_col": 2, "to_row": 21, "to_col": 9}},
{"node_id": 33, "ref_kind": "call", "span": {"from_row": 22, "from_col": 2, "to_row": 22, "to_col": 9}},
{"node_id": 37, "ref_kind": "call", "span": {"from_row": 23, "from_col": 2, "to_row": 23, "to_col": 9}}
],
"call_sites": [
{"callee": "connect", "args_count": 3, "span": {"from_row": 21, "from_col": 9, "to_row": 21, "to_col": 29}},
{"callee": "connect", "args_count": 3, "span": {"from_row": 22, "from_col": 9, "to_row": 22, "to_col": 31}},
{"callee": "connect", "args_count": 3, "span": {"from_row": 23, "from_col": 9, "to_row": 23, "to_col": 29}}
]
}
```
This tells you: `connect` is a function taking `(from, to, label)`, declared on line 11, and called 3 times inside `build_graph`.
## Programmatic Use
The index and explain modules can be used directly from ƿit scripts:
### Via shop (recommended)
```javascript
var shop = use('internal/shop')
var idx = shop.index_file(path)
```
`shop.index_file` runs the full pipeline (tokenize, parse, index, resolve imports) and caches the result.
### index.cm (direct)
If you already have a parsed AST and tokens, use `index_ast` directly:
```javascript
var index_mod = use('index')
var idx = index_mod.index_ast(ast, tokens, filename)
```
### explain.cm
```javascript
var explain_mod = use('explain')
var expl = explain_mod.make(idx)
// What is at line 10, column 5?
var result = expl.at_span(10, 5)
// Find all symbols named "connect"
var result = expl.by_symbol("connect")
// Get callers and callees of a symbol
var chain = expl.call_chain("demo.ce:connect:fn", 2)
```
For cross-file queries:
```javascript
var result = explain_mod.explain_across([idx1, idx2, idx3], "connect")
```
## LSP Integration
The semantic index powers these LSP features:
| Feature | LSP Method | Description |
|---------|------------|-------------|
| Find References | `textDocument/references` | All references to the symbol under the cursor |
| Rename | `textDocument/rename` | Rename a symbol and all its references |
| Prepare Rename | `textDocument/prepareRename` | Validate that the cursor is on a renameable symbol |
| Go to Definition | `textDocument/definition` | Jump to a symbol's declaration (index-backed with AST fallback) |
These work automatically in any editor with ƿit LSP support. The index is rebuilt on every file change.
## LLM / AI Assistance
The semantic index is designed to give LLMs the context they need to read and edit ƿit code accurately. ƿit is not in any training set, so an LLM cannot rely on memorized patterns — it needs structured information about names, scopes, and call relationships. The commands below are the recommended way to provide that.
### Understand a file before editing
Before modifying a file, index it to see its structure:
```bash
pit index file.ce
```
This gives the LLM every declaration, every reference, every call site, and the import list with resolved paths. Key things to extract:
- **`symbols`** — what functions exist, their parameters, and their doc comments. This is enough to understand the file's API without reading every line.
- **`imports`** with `resolved_path` — which modules are used, and where they live on disk. The LLM can follow these paths to read dependency source when it needs to understand a called function. Imports without a `resolved_path` are C built-ins (like `json`) with no script source to read.
- **`exports`** — for `.cm` modules, what the public API is. This tells the LLM what names other files can access.
### Investigate a specific symbol
When the LLM needs to rename, refactor, or understand a specific function:
```bash
pit explain --symbol update analysis.cm
```
This returns the declaration (with doc comment and parameter list), every reference, and every call site. The LLM can use this to:
- **Rename safely** — the references list has exact spans for every use of the name.
- **Understand callers** — `call_sites` shows where and how the function is called, including argument counts.
- **Read the doc comment** — often enough to understand intent without reading the function body.
### Investigate a cursor position
When the LLM is looking at a specific line and column (e.g., from an error message or a user selection):
```bash
pit explain --span file.ce:17:4
```
This resolves whatever is at that position — declaration or reference — back to the underlying symbol, then returns all references and call sites. Useful for "what is this name?" queries.
### Search across files
To find a symbol across multiple files, pass them all:
```bash
pit explain --symbol connect *.ce *.cm
pit explain --symbol send server.ce client.ce protocol.cm
```
This indexes each file and searches across all of them. The result merges all matching declarations, references, and call sites. Use this when the LLM needs to understand cross-file usage before making a change that touches multiple files.
### Import resolution
Every import in the index includes the original `module_path` (the string passed to `use()`). For script modules, it also includes `resolved_path` — the filesystem path the module resolves to. This lets the LLM follow dependency chains:
```json
{"local_name": "fd", "module_path": "fd", "resolved_path": ".cell/packages/core/fd.cm"}
{"local_name": "json", "module_path": "json"}
```
An import without `resolved_path` is a C built-in — no script source to read.
### Recommended workflow
1. **Start with `pit index`** on the file to edit. Scan imports and symbols for an overview.
2. **Use `pit explain --symbol`** to drill into any function the LLM needs to understand or modify. The doc comment and parameter list are usually sufficient.
3. **Follow `resolved_path`** on imports when the LLM needs to understand a dependency — index or read the resolved file.
4. **Before renaming**, use `pit explain --symbol` (or `--span`) to get all reference spans, then apply edits to each span.
5. **For cross-file changes**, pass all affected files to `pit explain --symbol` to see the full picture before editing.

docs/shop.md
---
title: "Shop Architecture"
description: "How the shop resolves, compiles, caches, and loads modules"
weight: 35
type: "docs"
---
The shop is the module resolution and loading engine behind `use()`. It handles finding modules, compiling them, caching the results, and loading C extensions. The shop lives in `internal/shop.cm`.
## Startup Pipeline
When `pit` runs a program, startup takes one of two paths:
### Fast path (warm cache)
```
C runtime → engine.cm (from cache) → shop.cm → user program
```
The C runtime hashes the source of `internal/engine.cm` with BLAKE2 and looks up the hash in the content-addressed cache (`~/.cell/build/<hash>`). On a cache hit, engine.cm loads directly — no bootstrap involved.
### Cold path (first run or cache cleared)
```
C runtime → bootstrap.cm → (seeds cache) → engine.cm (from cache) → shop.cm → user program
```
On a cache miss, the C runtime loads `boot/bootstrap.cm.mcode` (a pre-compiled seed). Bootstrap compiles engine.cm and the pipeline modules (tokenize, parse, fold, mcode, streamline) from source and caches the results. The C runtime then retries the engine cache lookup, which now succeeds.
### Engine
**engine.cm** is self-sufficient. It loads its own compilation pipeline from the content-addressed cache, with fallback to the pre-compiled seeds in `boot/`. It defines `analyze()` (source to AST), `compile_to_blob()` (AST to binary blob), and `use_core()` for loading core modules. It creates the actor runtime and loads shop.cm via `use_core('internal/shop')`.
### Shop
**shop.cm** receives its dependencies through the module environment — `analyze`, `run_ast_fn`, `use_cache`, `shop_path`, `runtime_env`, `content_hash`, `cache_path`, and others. It defines `Shop.use()`, which is the function behind every `use()` call in user code.
### Cache invalidation
All caching is content-addressed by BLAKE2 hash of the source. When any source file changes, its hash changes and the old cache entry is simply never looked up again. No manual invalidation is needed. To force a full rebuild, delete `~/.cell/build/`.
## Module Resolution
When `use('path')` is called from a package context, the shop resolves the module through a multi-layer search. Both the `.cm` script file and C symbol are resolved independently, and the one with the narrowest scope wins.
### Resolution Order
For a call like `use('sprite')` from package `myapp`:
1. **Own package** — `~/.cell/packages/myapp/sprite.cm` and C symbol `js_myapp_sprite_use`
2. **Aliased dependencies** — if `myapp/cell.toml` has `renderer = "gitea.pockle.world/john/renderer"`, checks `renderer/sprite.cm` and its C symbols
3. **Core** — built-in core modules and internal C symbols
For calls without a package context (from core modules), only core is searched.
### Private Modules
Paths starting with `internal/` are private to their package:
```javascript
use('internal/helpers') // OK from within the same package
// Cannot be accessed from other packages
```
### Explicit Package Imports
Paths containing a dot in the first component are treated as explicit package references:
```javascript
use('gitea.pockle.world/john/renderer/sprite')
// Resolves directly to the renderer package's sprite.cm
```
## Compilation and Caching
Every module goes through a content-addressed caching pipeline. Cache keys are based on the inputs that affect the output artifact, so changing any relevant input automatically invalidates the cache.
### Cache Hierarchy
When loading a module, the shop checks (in order):
1. **In-memory cache** — `use_cache[key]`, checked first on every `use()` call
2. **Build-cache dylib** — content-addressed `.dylib` in `~/.cell/build/<hash>`, found via manifest (see [Dylib Manifests](#dylib-manifests))
3. **Cached bytecode** — content-addressed in `~/.cell/build/<hash>` (no extension)
4. **Internal symbols** — statically linked into the `cell` binary (fat builds)
5. **Source compilation** — full pipeline: analyze, mcode, streamline, serialize
Dylib resolution wins over internal symbols, so a built dylib can hot-patch a fat binary. Results from compilation are cached back to the content-addressed store for future loads.
Each loading method (except the in-memory cache) can be individually enabled or disabled via `shop.toml` policy flags — see [Shop Configuration](#shop-configuration) below.
### Content-Addressed Store
The build cache at `~/.cell/build/` stores all compiled artifacts named by the BLAKE2 hash of their inputs:
```
~/.cell/build/
├── a1b2c3d4... # cached bytecode blob, object file, dylib, or manifest (no extension)
└── ...
```
Every artifact type appends a unique salt to the content before hashing, so identical content cached as two different artifact types can never share a cache entry:
| Salt | Artifact |
|------|----------|
| `obj` | compiled C object file |
| `dylib` | linked dynamic library |
| `native` | native-compiled .cm dylib |
| `mach` | mach bytecode blob |
| `mcode` | mcode IR (JSON) |
| `deps` | cached `cc -MM` dependency list |
| `fail` | cached compilation failure marker |
| `manifest` | package dylib manifest (JSON) |
This scheme provides automatic cache invalidation: when source changes, its hash changes, and the old cache entry is simply never looked up again.
### Failure Caching
When a C file fails to compile (missing SDK headers, syntax errors, etc.), the build system writes a failure marker to the cache using the `fail` salt. On subsequent builds, the failure marker is found and the file is skipped immediately — no time wasted retrying files that can't compile. The failure marker is keyed on the same content as the compilation (command string + source content), so if the source changes or compiler flags change, the failure is automatically invalidated and compilation is retried.
### Dylib Manifests
Dylibs live at content-addressed paths (`~/.cell/build/<hash>`) that can only be computed by running the full build pipeline. To allow the runtime to find pre-built dylibs without invoking the build module, `cell build` writes a **manifest** for each package. The manifest is a JSON file mapping each C module to its `{file, symbol, dylib}` entry. The manifest path is itself content-addressed (BLAKE2 hash of the package name + `manifest` salt), so the runtime can compute it from the package name alone.
At runtime, when `use()` needs a C module from another package, the shop reads the manifest to find the dylib path. This means `cell build` must be run before C modules from packages can be loaded.
For native `.cm` dylibs, the cache content includes source, target, native mode marker, and sanitize flags, then uses the `native` salt. Changing any of those inputs produces a new cache path automatically.
### Core Module Caching
Core modules loaded via `use_core()` in engine.cm follow the same content-addressed pattern. On first use, a module is compiled from source and cached by the BLAKE2 hash of its source content. Subsequent loads with unchanged source hit the cache directly.
User scripts (`.ce` files) are also cached. The first run compiles and caches; subsequent runs with unchanged source load from cache.
## C Extension Resolution
C extensions are resolved alongside script modules. A C module is identified by a symbol name derived from the package and file name:
```
package: gitea.pockle.world/john/prosperon
file: sprite.c
symbol: js_gitea_pockle_world_john_prosperon_sprite_use
```
### C Resolution Sources
1. **Build-cache dylibs** — content-addressed dylibs in `~/.cell/build/<hash>`, found via per-package manifests written by `cell build`
2. **Internal symbols** — statically linked into the `cell` binary (fat builds)
Dylibs are checked first at each resolution scope, so a built dylib always wins over a statically linked symbol. This enables hot-patching fat binaries.
### Name Collisions
Having both a `.cm` script and a `.c` file with the same stem at the same scope is a **build error**. For example, `render.cm` and `render.c` in the same directory will fail. Use distinct names — e.g., `render.c` for the C implementation and `render_utils.cm` for the script wrapper.
## Environment Injection
When a module is loaded, the shop builds an `env` object that becomes the module's set of free variables. This includes:
- **Runtime functions** — `logical`, `some`, `every`, `starts_with`, `ends_with`, `is_actor`, `log`, `send`, `fallback`, `parallel`, `race`, `sequence`
- **Capability injections** — actor intrinsics like `$self`, `$delay`, `$start`, `$receiver`, `$fd`, etc.
- **`use` function** — scoped to the module's package context
The set of injected capabilities is controlled by `script_inject_for()`, which can be tuned per package or file.
## Shop Configuration
The shop reads an optional `shop.toml` file from the shop root (`~/.cell/shop.toml`). This file controls which loading methods are permitted through policy flags.
### Policy Flags
All flags default to `true`. Set a flag to `false` to disable that loading method.
```toml
[policy]
allow_dylib = true # per-file .dylib loading (requires dlopen)
allow_static = true # statically linked C symbols (fat builds)
allow_mach = true # pre-compiled .mach bytecode (lib/ and build cache)
allow_compile = true # on-the-fly source compilation
```
### Example Configurations
**Production lockdown** — only use pre-compiled artifacts, never compile from source:
```toml
[policy]
allow_compile = false
```
**Pure-script mode** — bytecode only, no native code:
```toml
[policy]
allow_dylib = false
allow_static = false
```
**No dlopen platforms** — static linking and bytecode only:
```toml
[policy]
allow_dylib = false
```
If `shop.toml` is missing or has no `[policy]` section, all methods are enabled (default behavior).
## Shop Directory Layout
```
~/.cell/
├── packages/ # installed packages (directories and symlinks)
│ └── core -> ... # symlink to the ƿit core
├── build/ # content-addressed cache (safe to delete anytime)
│ ├── <hash> # cached bytecode, object file, dylib, or manifest
│ └── ...
├── cache/ # downloaded package zip archives
├── lock.toml # installed package versions and commit hashes
├── link.toml # local development link overrides
└── shop.toml # optional shop configuration and policy flags
```
## Key Files
| File | Role |
|------|------|
| `internal/bootstrap.cm` | Minimal cache seeder (cold start only) |
| `internal/engine.cm` | Self-sufficient entry point: compilation pipeline, actor runtime, `use_core()` |
| `internal/shop.cm` | Module resolution, compilation, caching, C extension loading |
| `internal/os.c` | OS intrinsics: dylib ops, internal symbol lookup, embedded modules |
| `package.cm` | Package directory detection, alias resolution, file listing |
| `link.cm` | Development link management (link.toml read/write) |
| `boot/*.cm.mcode` | Pre-compiled pipeline seeds (tokenize, parse, fold, mcode, streamline, bootstrap) |

docs/spec/.pages
nav:
- pipeline.md
- mcode.md

docs/spec/c-runtime.md
---
title: "C Runtime for Native Code"
description: "Minimum C runtime surface for QBE-generated native code"
---
## Overview
QBE-generated native code calls into a C runtime for anything that touches the heap, dispatches dynamically, or requires GC awareness. The design principle: **native code handles control flow and integer math directly; everything else is a runtime call.**
This document defines the runtime boundary — what must be in C, what QBE handles inline, and how to organize the C code to serve both the mcode interpreter and native code cleanly.
## The Boundary
### What native code does inline (no C calls)
These operations compile to straight QBE instructions with no runtime involvement:
- **Integer arithmetic**: `add`, `sub`, `mul` on NaN-boxed ints (shift right 1, operate, shift left 1)
- **Integer comparisons**: extract int with shift, compare, produce tagged bool
- **Control flow**: jumps, branches, labels, function entry/exit
- **Slot access**: load/store to frame slots via `%fp` + offset
- **NaN-box tagging**: integer tagging (`n << 1`), bool constants (`0x03`/`0x23`), null (`0x07`)
- **Type tests**: `JS_IsInt` (LSB check), `JS_IsNumber`, `JS_IsText`, `JS_IsNull` — these are bit tests on the value, no heap access needed
### What requires a C call
Anything that:
1. **Allocates** (arrays, records, strings, frames, function objects)
2. **Touches the heap** (property get/set, array indexing, closure access)
3. **Dispatches on type at runtime** (dynamic load/store, polymorphic arithmetic)
4. **Calls user functions** (frame setup, argument passing, invocation)
5. **Does string operations** (concatenation, comparison, conversion)
## Runtime Functions
### Tier 1: Essential (must exist for any program to run)
These are called by virtually every QBE program.
#### Intrinsic Lookup
```c
// Look up a built-in function by name. Called once per intrinsic per callsite.
JSValue cell_rt_get_intrinsic(JSContext *ctx, const char *name);
```
Maps name → C function pointer wrapped in JSValue. This is the primary entry point for all built-in functions (`print`, `text`, `length`, `is_array`, etc). The native code never calls intrinsics directly — it always goes through `get_intrinsic` → `frame` → `invoke`.
#### Function Calls
```c
// Allocate a call frame with space for nr_args arguments.
JSValue cell_rt_frame(JSContext *ctx, JSValue fn, int nr_args);
// Set argument idx in the frame.
void cell_rt_setarg(JSValue frame, int idx, JSValue val);
// Execute the function. Returns the result.
JSValue cell_rt_invoke(JSContext *ctx, JSValue frame);
```
This is the universal calling convention. Every function call — user functions, intrinsics, methods — goes through frame/setarg/invoke. The frame allocates a `JSFrameRegister` on the GC heap, setarg fills slots, invoke dispatches.
**Tail call variants:**
```c
JSValue cell_rt_goframe(JSContext *ctx, JSValue fn, int nr_args);
void cell_rt_goinvoke(JSContext *ctx, JSValue frame);
```
Same as frame/invoke but reuse the caller's stack position.
### Tier 2: Property Access (needed by any program using records or arrays)
```c
// Record field by constant name.
JSValue cell_rt_load_field(JSContext *ctx, JSValue obj, const char *name);
void cell_rt_store_field(JSContext *ctx, JSValue obj, JSValue val, const char *name);
// Array element by integer index.
JSValue cell_rt_load_index(JSContext *ctx, JSValue obj, JSValue idx);
void cell_rt_store_index(JSContext *ctx, JSValue obj, JSValue idx, JSValue val);
// Dynamic — type of key unknown at compile time.
JSValue cell_rt_load_dynamic(JSContext *ctx, JSValue obj, JSValue key);
void cell_rt_store_dynamic(JSContext *ctx, JSValue obj, JSValue key, JSValue val);
```
The typed variants (`load_field`/`load_index`) skip the key-type dispatch that `load_dynamic` must do. When parse and fold provide type information, QBE emit selects the typed variant and the streamline optimizer can narrow dynamic → typed.
**Implementation**: These are thin wrappers around existing `JS_GetPropertyStr`/`JS_GetPropertyNumber`/`JS_GetProperty` and their `Set` counterparts.
### Tier 3: Closures (needed by programs with nested functions)
```c
// Walk depth levels up the frame chain, read slot.
JSValue cell_rt_get_closure(JSContext *ctx, JSValue fp, int depth, int slot);
// Walk depth levels up, write slot.
void cell_rt_put_closure(JSContext *ctx, JSValue fp, JSValue val, int depth, int slot);
```
Closure variables live in outer frames. `depth` is how many `caller` links to follow; `slot` is the register index in that frame.
### Tier 4: Object Construction (needed by programs creating arrays/records/functions)
```c
// Create a function object from a compiled function index.
// The native code loader must maintain a function table.
JSValue cell_rt_make_function(JSContext *ctx, int fn_id);
```
Array and record literals are currently compiled as intrinsic calls (`array(...)`, direct `{...}` construction) which go through the frame/invoke path. A future optimization could add:
```c
// Fast paths (optional, not yet needed)
JSValue cell_rt_new_array(JSContext *ctx, int len);
JSValue cell_rt_new_record(JSContext *ctx);
```
### Tier 5: Collection Operations
```c
// a[] = val (push) and var v = a[] (pop)
void cell_rt_push(JSContext *ctx, JSValue arr, JSValue val);
JSValue cell_rt_pop(JSContext *ctx, JSValue arr);
```
### Tier 6: Error Handling
```c
// Trigger disruption. Jumps to the disrupt handler or unwinds.
void cell_rt_disrupt(JSContext *ctx);
```
### Tier 7: Miscellaneous
```c
JSValue cell_rt_delete(JSContext *ctx, JSValue obj, JSValue key);
JSValue cell_rt_typeof(JSContext *ctx, JSValue val);
```
### Tier 8: String and Float Helpers (called from QBE inline code, not from qbe_emit)
These are called from the QBE IL that `qbe.cm` generates inline for arithmetic and comparison operations. They're not `cell_rt_` prefixed — they're lower-level:
```c
// Float arithmetic (when operands aren't both ints)
JSValue qbe_float_add(JSContext *ctx, JSValue a, JSValue b);
JSValue qbe_float_sub(JSContext *ctx, JSValue a, JSValue b);
JSValue qbe_float_mul(JSContext *ctx, JSValue a, JSValue b);
JSValue qbe_float_div(JSContext *ctx, JSValue a, JSValue b);
JSValue qbe_float_mod(JSContext *ctx, JSValue a, JSValue b);
JSValue qbe_float_pow(JSContext *ctx, JSValue a, JSValue b);
JSValue qbe_float_neg(JSContext *ctx, JSValue v);
JSValue qbe_float_inc(JSContext *ctx, JSValue v);
JSValue qbe_float_dec(JSContext *ctx, JSValue v);
// Float comparison (returns C int 0/1 for QBE branching)
int qbe_float_cmp(JSContext *ctx, int op, JSValue a, JSValue b);
// Bitwise ops on non-int values (convert to int32 first)
JSValue qbe_bnot(JSContext *ctx, JSValue v);
JSValue qbe_bitwise_and(JSContext *ctx, JSValue a, JSValue b);
JSValue qbe_bitwise_or(JSContext *ctx, JSValue a, JSValue b);
JSValue qbe_bitwise_xor(JSContext *ctx, JSValue a, JSValue b);
JSValue qbe_shift_shl(JSContext *ctx, JSValue a, JSValue b);
JSValue qbe_shift_sar(JSContext *ctx, JSValue a, JSValue b);
JSValue qbe_shift_shr(JSContext *ctx, JSValue a, JSValue b);
// String operations
JSValue JS_ConcatString(JSContext *ctx, JSValue a, JSValue b);
int js_string_compare_value(JSContext *ctx, JSValue a, JSValue b, int eq_only);
JSValue JS_NewString(JSContext *ctx, const char *str);
JSValue __JS_NewFloat64(JSContext *ctx, double d);
int JS_ToBool(JSContext *ctx, JSValue v);
// String/number type tests (inline-able but currently calls)
int JS_IsText(JSValue v);
int JS_IsNumber(JSValue v);
// Tolerant equality (== on mixed types)
JSValue cell_rt_eq_tol(JSContext *ctx, JSValue a, JSValue b);
JSValue cell_rt_ne_tol(JSContext *ctx, JSValue a, JSValue b);
// Text ordering comparisons
JSValue cell_rt_lt_text(JSContext *ctx, JSValue a, JSValue b);
JSValue cell_rt_le_text(JSContext *ctx, JSValue a, JSValue b);
JSValue cell_rt_gt_text(JSContext *ctx, JSValue a, JSValue b);
JSValue cell_rt_ge_text(JSContext *ctx, JSValue a, JSValue b);
```
## What Exists vs What Needs Writing
### Already exists (in qbe_helpers.c)
All `qbe_float_*`, `qbe_bnot`, `qbe_bitwise_*`, `qbe_shift_*`, `qbe_to_bool` — these are implemented and working.
### Already exists (in runtime.c / quickjs.c) but not yet wrapped
The underlying operations exist but aren't exposed with the `cell_rt_` names:
| Runtime function | Underlying implementation |
|---|---|
| `cell_rt_load_field` | `JS_GetPropertyStr(ctx, obj, name)` |
| `cell_rt_load_index` | `JS_GetPropertyNumber(ctx, obj, JS_VALUE_GET_INT(idx))` |
| `cell_rt_load_dynamic` | `JS_GetProperty(ctx, obj, key)` |
| `cell_rt_store_field` | `JS_SetPropertyStr(ctx, obj, name, val)` |
| `cell_rt_store_index` | `JS_SetPropertyNumber(ctx, obj, JS_VALUE_GET_INT(idx), val)` |
| `cell_rt_store_dynamic` | `JS_SetProperty(ctx, obj, key, val)` |
| `cell_rt_delete` | `JS_DeleteProperty(ctx, obj, key)` |
| `cell_rt_push` | `JS_ArrayPush(ctx, &arr, val)` |
| `cell_rt_pop` | `JS_ArrayPop(ctx, arr)` |
| `cell_rt_typeof` | type tag switch → `JS_NewString` |
| `cell_rt_disrupt` | `JS_Throw(ctx, ...)` |
| `cell_rt_eq_tol` / `cell_rt_ne_tol` | comparison logic in mcode.c `eq_tol`/`ne_tol` handler |
| `cell_rt_lt_text` etc. | `js_string_compare_value` + wrap result |
### Needs new code
| Runtime function | What's needed |
|---|---|
| `cell_rt_get_intrinsic` | Look up intrinsic by name string, return JSValue function. Currently scattered across `js_cell_intrinsic_get` and the mcode handler. Needs a clean single entry point. |
| `cell_rt_frame` | Allocate `JSFrameRegister`, set function slot, set argc. Exists in mcode.c `frame` handler but not as a callable function. |
| `cell_rt_setarg` | Write to frame slot. Trivial: `frame->slots[idx + 1] = val` (slot 0 is `this`). |
| `cell_rt_invoke` | Call the function in the frame. Needs to dispatch: native C function vs mach bytecode vs mcode. This is the critical piece — it must handle all function types. |
| `cell_rt_goframe` / `cell_rt_goinvoke` | Tail call variants. Similar to frame/invoke but reuse caller frame. |
| `cell_rt_make_function` | Create function object from index. Needs a function table (populated by the native loader). |
| `cell_rt_get_closure` / `cell_rt_put_closure` | Walk frame chain. Exists inline in mcode.c `get`/`put` handlers. |
## Recommended C File Organization
```
source/
cell_runtime.c — NEW: all cell_rt_* functions (the native code API)
qbe_helpers.c — existing: float/bitwise/shift helpers for inline QBE
runtime.c — existing: JS_GetProperty, JS_SetProperty, etc.
quickjs.c — existing: core VM, GC, value representation
mcode.c — existing: mcode interpreter (can delegate to cell_runtime.c)
```
**`cell_runtime.c`** is the single file that defines the native code contract. It should:
1. Include `quickjs-internal.h` for access to value representation and heap types
2. Export all `cell_rt_*` functions with C linkage (no `static`)
3. Keep each function thin — delegate to existing `JS_*` functions where possible
4. Handle GC safety: after any allocation (frame, string, array), callers' frames may have moved
### Implementation Priority
**Phase 1** — Get "hello world" running natively:
- `cell_rt_get_intrinsic` (to find `print` and `text`)
- `cell_rt_frame`, `cell_rt_setarg`, `cell_rt_invoke` (to call them)
- A loader that takes QBE output → assembles → links → calls `cell_main`
**Phase 2** — Variables and arithmetic:
- All property access (`load_field`, `load_index`, `store_*`, `load_dynamic`)
- `cell_rt_make_function`, `cell_rt_get_closure`, `cell_rt_put_closure`
**Phase 3** — Full language:
- `cell_rt_push`, `cell_rt_pop`, `cell_rt_delete`, `cell_rt_typeof`
- `cell_rt_disrupt`
- `cell_rt_goframe`, `cell_rt_goinvoke`
- Text comparison wrappers (`cell_rt_lt_text`, etc.)
- Tolerant equality (`cell_rt_eq_tol`, `cell_rt_ne_tol`)
## Calling Convention
All `cell_rt_*` functions follow the same pattern:
- First argument is always `JSContext *ctx`
- Values are passed/returned as `JSValue` (64-bit, by value)
- Frame pointers are `JSValue` (tagged pointer to `JSFrameRegister`)
- String names are `const char *` (pointer to data section label)
- Integer constants (slot indices, arg counts) are `int` / `long`
Native code maintains `%ctx` (JSContext) and `%fp` (current frame pointer) as persistent values across the function body. All slot reads/writes go through `%fp` + offset.
## What Should NOT Be in the C Runtime
These are handled entirely by QBE-generated code:
- **Integer arithmetic and comparisons** — bit operations on NaN-boxed values
- **Control flow** — branches, loops, labels, jumps
- **Boolean logic** — `and`/`or`/`not` on tagged values
- **Constant loading** — integer constants are immediate, strings are data labels
- **Type guard branches** — the `is_int`/`is_text`/`is_null` checks are inline bit tests; the branch to the float or text path is just a QBE `jnz`
The `qbe.cm` macros already handle all of this. The arithmetic path looks like:
```
check both ints? → yes → inline int add → done
→ no → call qbe_float_add (or JS_ConcatString for text)
```
The C runtime is only called on the slow paths (float, text, dynamic dispatch). The fast path (integer arithmetic, comparisons, branching) is fully native.
