GnuPG is a tricky and finicky piece of software; while learning it, I had to discover a lot of its peculiarities.
It depends heavily on the gnupg home directory (~/.gnupg by default), where it keeps its keyrings, which effectively serve as its address book.
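If you want to experiment without touching your real keyring, you can point gpg at a throwaway home directory; a minimal sketch (the temporary path is arbitrary):
export GNUPGHOME="$(mktemp -d)"
gpg2 --list-keys # an empty keyring, safe for experiments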
Example:
gpg2 --yes --batch --trust-model always --armor --recipient 'security@slackware.com'
The --yes --batch --trust-model always flags are needed for demo purposes, so that no questions are asked.
--armor is needed because the output is binary by default, and we need it encoded as text.
Contrary to whatever you might read in the manuals, cryptography is not so much about algorithms as it is about people.
When working with gnupg, you will be dealing with people all the time: looking at them, trusting or distrusting them, looking at their connections, et cetera.
Keys themselves are huge (4096 bytes is, like, two pages of text); you don't want to manage them by hand. But each of them has a fingerprint, which is used to identify it more or less uniquely.
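To see a key's fingerprint, a one-liner suffices (using the same Slackware address as the rest of this page):
gpg2 --fingerprint 'security@slackware.com'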
It is not documented anywhere, but now I am telling you.
gpg2 --no-verbose -q --recv-keys '6A4463C040102233' 2>&1
echo
gpg2 --no-verbose -q --recv-keys '0368EF579C7BA3B6' 2>&1
gpg: enabled debug flags: memstat
gpg: data source: http://pgp.surf.nl:11371
gpg: armor header: Comment: Hostname: pgp.surf.nl
gpg: armor header: Version: Hockeypuck 2.1.2
gpg: key 6A4463C040102233: number of dropped non-self-signatures: 231
gpg: pub  dsa1024/6A4463C040102233 2003-02-26  Slackware Linux Project <security@slackware.com>
gpg: removing signature from key 6A4463C040102233 on user ID "Slackware Linux Project <security@slackware.com>": signature superseded
gpg: removing signature from key 6A4463C040102233 on user ID "Slackware Linux Project <security@slackware.com>": signature superseded
gpg: key 6A4463C040102233: 1 duplicate signature removed
gpg: key 6A4463C040102233: "Slackware Linux Project <security@slackware.com>" not changed
gpg: Total number processed: 1
gpg:              unchanged: 1
gpg: keydb: handles=2 locks=1 parse=0 get=2
gpg:        build=0 update=0 insert=0 delete=0
gpg:        reset=0 found=2 not=0 cache=0 not=0
gpg: kid_not_found_cache: count=0 peak=0 flushes=0
gpg: sig_cache: total=20 cached=14 good=14 bad=0
gpg: random usage: poolsize=600 mixed=0 polls=0/0 added=0/0
gpg:               outmix=0 getlvl1=0/0 getlvl2=0/0
gpg: rndjent stat: collector=0x0000000000000000 calls=0 bytes=0
gpg: secmem usage: 0/32768 bytes in 0 blocks
gpg: enabled debug flags: memstat
gpg: data source: http://pgp.surf.nl:11371
gpg: armor header: Comment: Hostname: pgp.surf.nl
gpg: armor header: Version: Hockeypuck 2.1.2
gpg: key 0368EF579C7BA3B6: number of dropped non-self-signatures: 14
gpg: pub  dsa1024/0368EF579C7BA3B6 2007-01-27  SlackBuilds.org Development Team <slackbuilds-devel@slackbuilds.org>
gpg: key 0368EF579C7BA3B6: "SlackBuilds.org Development Team <slackbuilds-devel@slackbuilds.org>" not changed
gpg: Total number processed: 1
gpg:              unchanged: 1
gpg: keydb: handles=2 locks=1 parse=0 get=2
gpg:        build=0 update=0 insert=0 delete=0
gpg:        reset=0 found=2 not=0 cache=0 not=0
gpg: kid_not_found_cache: count=0 peak=0 flushes=0
gpg: sig_cache: total=9 cached=7 good=7 bad=0
gpg: random usage: poolsize=600 mixed=0 polls=0/0 added=0/0
gpg:               outmix=0 getlvl1=0/0 getlvl2=0/0
gpg: rndjent stat: collector=0x0000000000000000 calls=0 bytes=0
gpg: secmem usage: 0/32768 bytes in 0 blocks
This is not obvious, but this is true.
Test message 0003 +++.
Encrypt with one key:
gpg2 --yes --batch --armor --trust-model always --no-default-recipient --recipient 'security@slackware.com' --encrypt
-----BEGIN PGP MESSAGE-----

hQEOA3aHN/lOUjVpEAP+NsqQ/tY2+EwNeYX2TV1RHMnQB06uEbI24uKIAqyb6x7M
v328C+z42Fbbyf82/yb7SuJvwnjLJZ0PcskTWQmo7009yi7OBnddkfpCAzSN67zD
ijps3oTtBn2gXHJhgdGK+x9hpTDAMAPC5UDm6CJIfH5/GbATxNOn5InWKk4TWOwD
/iivsu+Cw7/qiS5f2CY2PXRtdINHsNew/sCkRAtQPeFraNS2BCF4unuIPBeUM0F9
U5XBASzq1L+leFzv2FTrc4xk/4lThUFEBo7+XPR3A85rnpqAMqDiN5OVfHnrNVRt
dUfyi2zPddienWPuMNEI37eY4YejmWoH4CNyR3LQDZ9w0jsBb5oOBwc6Mi9pWGxz
lRs27RScTFqoQg/WkHVuoFTMkc3DBgj7TeUvViEf45Q1mRJaYgavMuXXuhB8EQ==
=biKf
-----END PGP MESSAGE-----
There is an empty line at the top, but not at the bottom? WTF? (The blank line is part of the armor format: it separates the optional armor headers, none here, from the base64 payload.)
wc -c <<nil
nil
wc -c <<nil
...
nil
0
517
The sizes differ hugely!
HUGELY UNOBVIOUS
The following example has the same cleartext, but the ciphertext will be different.
sleep 1; gpg2 --yes --batch --armor --trust-model always --recipient 'security@slackware.com' --encrypt
-----BEGIN PGP MESSAGE-----

hQEOA3aHN/lOUjVpEAP/RUxAlhkW8PrJ4ULAeM9Po9x9vQtKUfdpbktt7Fd5CvQx
1PpXNLSPQcu3RmMWExlSOdAc89u4oeJsleR18jQuwUTfuqC5k/zzbsxHitEeoWSW
NVL2g+g0qeh20XOPe/otVPA4Ho/NjX4WwNy9NVgZzMPw90R4/cAhCcikvPHzEhEE
AL1cKFklENuPegkFmIe+3FifaokEAe8t2Mkk9jd5uBMVaTmnlXzjhloXopscpCsD
yKAcEdfsP2rvvIXhssxAvPAqauWyYpgULe64AdxeniutjyTyRQxvExmWw1HQRmZZ
XEgptlTFpL53ZHOvoaN9TxfqGP5BP8ylDZdplxBUCVZw0lIB48yndY7xChfeebbB
ed1Nmtd9TI5KOZHUz018M/+adXzu8agHHRoxmI1rwMi8akzPVLl36af1DN3lzfCT
6RF17GXgpewPHqbiK6miiB/7AaSb
=nbTL
-----END PGP MESSAGE-----
diff --report-identical-files --side-by-side \
  <(cat <<EOF
...
EOF
  ) \
  <(cat <<EOF
...
EOF
  )
exit 0
-----BEGIN PGP MESSAGE-----                                       -----BEGIN PGP MESSAGE-----
hQEOA3aHN/lOUjVpEAP/d9lUngV9EVFh9yGRmijaY0xgbiRxC96FVw11yosiO | hQEOA3aHN/lOUjVpEAQAkzjNExyYNSqrM152b7npJzo20WXbBKDYEZ8kvoBwu
+LUFcJoKVl7at+OA7YFQepNoGlrjaMmM64+1mPZjpVau6ydfSHPR41TOPoJVV | jmoaiJmFPLU2krlDzT8tr5nrNYU9TXM2wx0A1geM59Xp6T7HzSVjlehaGLTga
8dG2eM+cBY+disXlYHHD6KX9R1hzKa0bhrzwmbqbRLxRTitb7Z9MurUJcXVDj | 0bo6L7/m3guF86kd7+NTsbLeogQ6Ra6ckOLrtW5wbR25kfEub2QkA6tJli7E4
/2t0oSFJ5V4Btz7rUIHi8bwa9mlVeZgkwVIAnanvYGAgjaw0JJinqtyTe+Cb/ | /iQe101Tuaq93roc2Ar1QijUSNEDZgv0m/7xQ6lWnT0APegGC843LPiIhXfEe
k1bjgfPVhKmztr3msZrOy8xUJlFekZVW5KNGD4vgwTZSqEW37Q0bQ3MYiR4sI | ozGi23quj3oNJpHSMeQvzxGw5+K+CkRuWnqT43nfpGOVMAsfed/DRbVjQuQs5
RB2XZHoXijDzJNHRx+tfCk5nhZ9zLgfWkDOQxneSF+MB0lIBbNw6fp9QcTZSb | vjoMG+Sw8md35lsvWsTv+dErBv/UtldMwxLNzTXX+Vwi0lIBo32DixsV7wx4f
RC1orzZAABW5rzzwLpipT7uQ5ISGGxtBtZhFr6uZX8GHl8Gqk0pLpMEQkbBP4 | B/Rj7VoWYsOgLLR0xXO48L7eQVWgYy3sOygBZwgcfDs3a1ihIlrBYXqMre55F
n1BWvFbe2G7t/9PRyCLAbiqnVHDm                                  | a7W3U2gV4YnjS8CTaOjzB09ZCsAw
=Cjf8                                                         | =iXjK
-----END PGP MESSAGE-----                                         -----END PGP MESSAGE-----
They are identical up to about character 18, but not further. (The common prefix is the packet header and the recipient key ID; everything after that depends on the random session key.)
read -d '' cleartext
diff -y \
  <(printf "%s" "$cleartext" | gpg2 --yes --batch --armor --trust-model always --hidden-recipient 'security@slackware.com' --encrypt) \
  <(printf "%s" "$cleartext" | gpg2 --yes --batch --armor --trust-model always --recipient 'security@slackware.com' --encrypt)
printf "\nVisible recipient\n"
printf "%s" "$cleartext" | gpg2 --yes --batch --armor --trust-model always --no-default-recipient --recipient 'security@slackware.com' --encrypt | gpg2 --pinentry-mode cancel --list-packets 2>&1 | grep -F 'encrypted with'
printf "\nHidden recipient\n"
printf "%s" "$cleartext" | gpg2 --yes --batch --armor --trust-model always --no-default-recipient --hidden-recipient 'security@slackware.com' --encrypt | gpg2 --pinentry-mode cancel --list-packets 2>&1 | grep -F 'encrypted with'
-----BEGIN PGP MESSAGE-----                                       -----BEGIN PGP MESSAGE-----
hQEOAwAAAAAAAAAAEAQAx07fFUO7g5qBUJthUCyfa3L1PguSLxGAe8AlBhZbe | hQEOA3aHN/lOUjVpEAP8COSescSPFwZHrmYNyMy6XjIa8vN0bbBj50QzhWGLC
PqQ6E8/sTFwd2ix6elUPolsJ9fFQ7rU0x11rBJHH7i6uBDWzasdkp8Rn3w/Ss | OtBLMcGjitCrwRiiEsDvb70YkC1W4ujv9PcKnMPlk/GZeuCiJqXXivMaNvRV1
NL77DExogIFe9HqMzJzsELiheYMVN76XAlkr7kCS7H2Fd7TqC4ZXqlo9Eg+r4 | okIFYgvy845eZ1VV//CJXTDTUm7YKbxisRptAWJISb1QydiuYrmh0BEs2cUgH
ALPL/30E+GXF+poTXOCXI95cdgHGiRWZyxQWFQeduiHKNOkBfe53YPCwO7yHl | /2Qr9kqo0iBrGF8TUGZKK0rbiAQSis1yC/dGNfCkI/nmcHWzeEE/fsOg+IZaJ
NS/wDQH9lE3zta+9uYYG2U3/ktjRye2JOQ92tPC5TjXpq9fKA9+PbWP5K4KoL | NGWPi4H8AKvvStJrvnNctvFd5eRkDihzNWjbN15QQhCz/3qY12GxRxhTuYv9+
c7bqUM16d5FQlFSH/yimAGb9gRMQFE8SpawSv92XrY1P0lEBBsZ8z9ZtsigbZ | X35LzHc72yVt3Yu74XgK8QZZP4vwsE6Lapm6yEmJX1Ex0lEBXvvffz3N433Hc
gM129ABQc1gHRzspmm4LCbFzJdLgpq5sXZ5UygZcRfD3wmdgmeuJB6xylbe7r | w2aTs6+cYiQHMCX6DaFACpNuIj9CGNSJ3/Yyw56vCpMYG5aHVs6gH1lHFokla
803nE4RSPuR52I8B+UZMh+2F240=                                  | 77J8ny9fW1MkOyi7MxdZone667k=
=iREK                                                         | =xg18
-----END PGP MESSAGE-----                                         -----END PGP MESSAGE-----

Visible recipient
gpg: encrypted with 1024-bit ELG key, ID 768737F94E523569, created 2003-02-26

Hidden recipient
gpg: encrypted with ELG key, ID 0000000000000000
HIDING THE RECIPIENT IS NOT (!!!) THE DEFAULT SETTING. By default, an interceptor WILL KNOW whom you are writing to.
Now try with different recipients:
read -d '' cleartext
diff -y \
  <(printf "%s" "$cleartext" | gpg2 --yes --batch --armor --trust-model always --recipient 'security@slackware.com' --encrypt) \
  <(printf "%s" "$cleartext" | gpg2 --yes --batch --armor --trust-model always --recipient 'security@slackware.com' --recipient 'slackbuilds-devel@slackbuilds.org' --encrypt)
printf "%s" "$cleartext" | gpg2 --yes --batch --armor --trust-model always --recipient 'security@slackware.com' --encrypt | wc -c
printf "%s" "$cleartext" | gpg2 --yes --batch --armor --trust-model always --recipient 'slackbuilds-devel@slackbuilds.org' --encrypt | wc -c
printf "%s" "$cleartext" | gpg2 --yes --batch --armor --trust-model always --recipient 'security@slackware.com' --recipient 'slackbuilds-devel@slackbuilds.org' --encrypt | wc -c
printf "\nAlso metadata for multiple recipients:\n"
printf "%s" "$cleartext" | gpg2 --yes --batch --armor --trust-model always --recipient 'security@slackware.com' --recipient 'slackbuilds-devel@slackbuilds.org' --encrypt | gpg2 --list-packets 2>&1 | grep -F 'encrypted with'
-----BEGIN PGP MESSAGE-----                                       -----BEGIN PGP MESSAGE-----
hQEOA3aHN/lOUjVpEAQAwSYL8+1VxzbgDuH4O+3APLww1/YRPwVT1fMwcJaJn | hQEOA3aHN/lOUjVpEAP+NOgmHMV3+7lJNqBPzau3Jk36HQAMTToo8UUaEoIsK
ATZQzEGEPhG30ffWXPhC+fYHIbVLo0dEtkD+2K8Q1YYpvLWf8gRJPs5aFal3G | neHzSnrQruLmfQDKDrcx3L7gA40ML7V4lYJ8Gr+xBKgkInk5veHyEQEO28cct
JB8DDO0otSLFpBa4qIkKBajwwxnQy5lsyz5+nHhynWsAqgYGbjpXuryPnAKvY | WVrde9CaRQAU7rQLrx+yBclHTtjFo5Zz0bkk6G5kbAtb9GVdV2M2LYNHElzdt
+waeFOL4gykwxQrcJMk5bcsTntXErVRR+lDFhklv0d3/M62hbG3RyvL4yq/Ir | AMnS0UDUSc6OiJXJFvc2wTeUmBIW4zaWDDHfPbZ9HOMK4FlDnLwZ8E5q3QA5c
HdjAlcrGyUnKN7f/sFjZghRCRV7B/GugruC5vD45Nc030De5o1yKK30uSqQ85 | AYGs2qba/FhCddbJlQkrppxijR/KW6OB2He6QqlpIT5rMtTmff9SumPetztJx
8iqfThVldyWiYJl+iyIwFikhv3VSKXIoMBJRDzqLR5qE0jsBmnKyfX+iOOYS5 | NTUO2unAEfl2sR4qD2hdgMxKo7PLM37PuUyBgn+Yx7Q3hQIOA5HkZF/FddSbE
+u6WQqtxwsYoRFM11AwgaWN7cpnWkIlSFgyPxa6xN3zt3q4x8nZPmNNeHIhKd | XiVVNr6L5j1+Zo7HaBjh6RYAJ2Cx6zzFsP0dVyrZ3Z+2U600nE7Ttt2Bkwgjf
=8HI6                                                         | 8QNDglJewjR4nd0LKg7A55lVNpGWM4l7tZ54WPagn8UOJtFgxNVliupjH5G3Y
                                                              > qrFmOGxoQoIuwPKQsZm5gekFXJraHLuOWKjH4+I5FeeJErnkJGKu3m2Ay+Jo+
                                                              > 2gi81VmAYwfpJmXlankXkATAFRMk/i9mfkQPFkrE+VVei/vyxT99alInghOQm
                                                              > lW+1CaJwhsrWLx6qXTceAe5aNE9d3XPfDu90pEHxENWkQGffuK4BQyYGy8l7X
                                                              > Kx8fS25cS22vsriYpV6hxAf/arg5Qv7Hy6GCh9y+bNBfzGX214Jh8yjVLUtu/
                                                              > 6Ivh/NvOjpf5QVlebmsur36KxZy/n/y/OQkjTslB47gdm5twdzty81O3wYUNO
                                                              > YzhOCzCldClUTVygU89ekB3BvRzi17lVoNOstIz6CGVSPIEUWiYpHOwNdjBoo
                                                              > mftI6SVxFYiJPeiVCL3mweE/YKx9zyVhJYKHSz4L2fKfDF5vLTymsV0zyCsUG
                                                              > 6azAXXn3O4LsAofz+sjPJvbN3vzk0PSOYAtYSdyJOUIFt9Q/Xu7xuwr8hX4a5
                                                              > uYPRymgFNtbmNAxrgQD/NoYJvYd0CEpebv91NBaYrae9SNI7AbhlkXwc12bcF
                                                              > uMq1d1MhktDJrt7zC5Cd0ATLEJ4JfUFsznudgaUEafAHrltnMSS9jYoKhynzf
                                                              > =yTHX
-----END PGP MESSAGE-----                                         -----END PGP MESSAGE-----
516
862
1231

Also metadata for multiple recipients:
gpg: encrypted with 2048-bit ELG key, ID 0x91E4645FC575D49B, created 2007-01-27
gpg: encrypted with 1024-bit ELG key, ID 0x768737F94E523569, created 2003-02-26
A two-recipient message is NOT two ciphertexts concatenated (516 + 862 = 1378, not 1231). Why? Because gpg encrypts the body once with a random session key, and then encrypts only that session key separately to each recipient; each extra recipient adds one small key packet, not a whole ciphertext.
The digit after sig (none, 1, 2, or 3) is the certification level, i.e. how carefully the signer claims to have verified the key:
gpg2 --list-sigs 'slackbuilds-devel@slackbuilds.org'
pub   dsa1024/0x0368EF579C7BA3B6 2007-01-27 [SC]
      Key fingerprint = D307 6BC3 E783 EE74 7F09  B8B7 0368 EF57 9C7B A3B6
      Keygrip = F9231F44B9E53FAC422A0B8D69FAC7D94F824BB1
uid                   [ unknown] SlackBuilds.org Development Team <slackbuilds-devel@slackbuilds.org>
sig          0xF1D5979976B20C2C 2007-01-27  [User ID not found]
sig          0x5E56AAAFA75CBDA0 2007-01-27  Eric Hameleers
sig          0x57DB2CB7EABADD7B 2007-01-27  [User ID not found]
sig 3        0xED03EF40D0E52F04 2007-01-27  [User ID not found]
sig 3        0x8D01BA7CBD9A880E 2007-01-27  [User ID not found]
sig 3        0x0368EF579C7BA3B6 2007-01-27  SlackBuilds.org Development Team <slackbuilds-devel@slackbuilds.org>
sig          0x151BC8BDF48D71EA 2007-02-08  [User ID not found]
sig          0xB44A343FA8F23B66 2008-11-22  [User ID not found]
sig          0x72C395892C5402BF 2009-01-05  [User ID not found]
sig          0x6A4463C040102233 2013-03-12  Slackware Linux Project <security@slackware.com>
sig          0xE8D8E103E906E998 2017-04-06  [User ID not found]
sig          0x883EC63B769EE011 2016-11-10  [User ID not found]
sig X        0x78C2DF2D1A170CC6 2016-07-12  [User ID not found]
sub   elg2048/0x91E4645FC575D49B 2007-01-27 [E]
      Key fingerprint = 2415 DC27 B7A0 F5D6 5806  E4C4 91E4 645F C575 D49B
      Keygrip = A67D366751302FE14888E348BB2634D409EC71F4
sig          0x0368EF579C7BA3B6 2007-01-27  SlackBuilds.org Development Team <slackbuilds-devel@slackbuilds.org>
https://rollenspiel.social/@ArneBab/110294093784538500
C-s key tab C-space C-
M-x epa-import-keys-region
PGP with mu4e just works.
Sadly it’s not the case for every GnuPG client.
You don’t!
Enemies MAY send emails pretending to be your.name@gmail.com, and you cannot do anything about it if your friends don't verify signatures. (And they do not.)
You can’t!
Moreover, a terrorist can certify your key and upload it to the server, and then somebody can claim that you talked to a terrorist!
So, the protection against this is only social. You MUST memorise the phrase “anybody can certify my key and upload it to the server”.
Moreover, most people keep their contacts in Google Contacts anyway, so Google does know who you know. Also, most people won’t bother uploading your key anywhere.
The command below should work if you have at least one signed message.
Depending on your settings (auto-key-locate mechanisms, auto-key-import, and auto-key-retrieve), it will either verify the signature as true or fake, or complain that it has no key.
mu verify --verbose "$(mu find -f l flag:signed | tail -n 1)"
What is the difference between those settings (auto-key-locate mechanisms, auto-key-import, and auto-key-retrieve)? This is very confusing.
So, locate is used for encryption, when sending.
I suggest refraining from encryption until you are fully comfortable with signing, because you risk permanently losing your messages.
If you are fine with using some centralised way to find your friend's pubkey, you can set that locate to some method; just make sure that you understand what it does.
import is for the cases when a key is attached to the message (yes, you can do that, see --include-key-block).
retrieve is for the cases when the key is not attached to the message; gnupg will then look for the key on the keyserver.
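Putting the three together, a minimal gpg.conf sketch; the mechanism list and the keyserver URL are only examples, pick your own:
# ~/.gnupg/gpg.conf
auto-key-locate clear,nodefault,wkd
auto-key-import
auto-key-retrieve
keyserver hkps://keyserver.ubuntu.com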
(setf mm-verify-option 'always)
(cl-pushnew "multipart/signed" gnus-buttonized-mime-types)
For details, see gnus#Security
mm is the gnus message viewer, reused by mu4e
mm-verify-option is set, but mm is always unsure about the signature. That is probably because you don't have the key.
You may want to set gnupg to fetch the key automatically (see above).
You may also want to import a key manually. I have not yet found a way to do it with mm/gnus (probably there is a way), but you can use an improvised function.
(defun mu4e-view-snarf-pgp-key (&optional msg)
  "Snarf the pgp key for the specified message."
  (interactive)
  (let* ((msg (or msg (mu4e-message-at-point)))
         (path (mu4e-message-field msg :path))
         (cmd (format "%s verify --verbose %s"
                      mu4e-mu-binary
                      (shell-quote-argument path)))
         (output (shell-command-to-string cmd)))
    (message "mu4e-view-snarf-pgp-key: msg=%s path=%s cmd=%s output=%s"
             msg path cmd output)
    (let ((case-fold-search nil)
          (index 0))
      ;; Collect every fingerprint that `mu verify' reports, and fetch it.
      (while (string-match "finger-print[[:space:]]*: \\([A-F0-9]+\\)" output index)
        (let* ((cmd (format "%s --recv %s"
                            epg-gpg-program
                            (match-string 1 output)))
               (output (shell-command-to-string cmd)))
          (setf index (match-end 0))
          (message "%s" output))))))
(Malory=active, Eve=passive/eavesdropping)
If an evil Malory steals your Friend's Gmail password, but not their key password, they won't send you a broken signature; they will just send you an unsigned message.
Gmail works over TLS, so an evil Eve won't be able to MITM the message on the Friend-Gmail link without the Friend noticing.
If the Friend is stupid, they will neglect the TLS warning, and Malory may mangle the message, and the signature will be broken.
Gmail might deliver your Friend’s message over unsafe link, or Malory might have cracked Gmail, and the signature will be broken.
You are probably also not an idiot: you only connect to your mail server over TLS, so Eve cannot do anything, and you care about TLS warnings, so the signature can only be broken if Malory has hacked LetsEncrypt or your server.
So really the only place where the signature can be realistically broken without TLS being broken is between mail servers.
This means that all of your email is probably garbage, and you need to call your friend and clarify what is going on, ideally via several means. (The best way is probably SIP operators from neutral countries, small enough so that nobody cares about them.)
You don’t, as mu itself does not send messages.
:hook (mu4e-compose-mode . (lambda () (mml-secure-sign)))
:config
(setf mm-sign-option 'guided)
This will add a marker which will tell mm/mml/gnus to sign automatically. Delete it if you do not actually want to sign the message.
If you do want to sign the message, try sending it, and mm will ask you for the key.
For me this signs with pgp/mime, and I am fine with that.
There is (as of 2024) an extension called "Mailvelope" (https://www.mailvelope.com/). It is open-source, except for the Gmail API key.
They claim that their key management is fully local (unless you choose to upload your pubkeys).
It can connect to gnupg, although this requires some gymnastics.
You cannot use it with a native Gmail app, well, unless you are some kind of Android disassembly ninja.
There are two methods, both bad.
This is not too bad if you can use K-9 Mail with Gmail.
Creates nice pgp/mime messages and encrypts attachments, but works over IMAP, so your battery will suffer, and all the nice features of Gmail labels will be lost.
You will also be able to verify signatures.
Annoying to copy/paste, for both writing and reading. Does not encrypt/sign attachments unless you are a base64 ninja.
But keeps the original Gmail app and all its niceties.
gpg2 --sign --encrypt --recipient 'you@example.com' ~/passwords.txt # use your own key's address as the recipient
This will create ~/passwords.txt.gpg
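Reading it back is symmetric; a sketch (the output path is arbitrary):
gpg2 --output ~/passwords.txt --decrypt ~/passwords.txt.gpg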
There are different pinentries; you probably want to rebuild pinentry with --enable-inside-emacs, but if you are running a GUI and a window appears, don't be surprised.
There are pinentries in gtk2, gtk3, and such.
Why --sign? Being able to decrypt a file means that it is fine, right? No! Or, rather, it only guarantees that Malory has not seen your passwords. But Malory may have replaced the file, and you would still see it as encrypted and be able to decrypt it.
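This is why --sign above matters: when the file is signed, gpg also verifies the signature during decryption. A sketch; the expected output line is paraphrased from memory:
gpg2 --decrypt ~/passwords.txt.gpg >/dev/null
# expect something like: gpg: Good signature from "You <you@example.com>"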
Emacs will decrypt it automatically, potentially asking you for a passphrase via one of the pinentries.
If you need to use a password from that file in some other software, encrypt that password alone, and use PassCmd, for example:
PassCmd "gpg2 --quiet --for-your-eyes-only --no-tty --decrypt ~/.password-store/mbsync/gmail.gpg"
The same will work in a dumb console.
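For completeness, a sketch of how such a single-password file could be created in the first place; the address and the password are placeholders:
printf '%s' 'hunter2' | gpg2 --encrypt --recipient 'you@example.com' --output ~/.password-store/mbsync/gmail.gpg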
From those which have no pubring.gpg (new ones):
gpg2 --keyring path/to/pubring.kbx --export | gpg2 --import
From older ones, which have pubring.gpg:
gpg2 --import path/to/pubring.gpg
Okay, a keyserver is essentially a phonebook, what in computing is called a "directory". The problem with a keyserver is that everyone can upload all kinds of crap there, and the Web of Trust is not very reliable, so you want a more robust way of finding keys and signatures.
I will use bernhard.reiter@intevation.de, because it is used in a lot of examples on the Internet.
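For comparison, the plain keyserver lookup would be something like this; the keyserver URL is just an example:
gpg2 --keyserver hkps://keyserver.ubuntu.com --search-keys 'bernhard.reiter@intevation.de'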
--locate-keys will query not just keyservers, but other methods too.
The nodefault there is important; replace wkd with another method of your preference.
gpg2 --auto-key-locate clear,nodefault,wkd --locate-keys bernhard.reiter@intevation.de 2>&1
gpg: key 0x2B7BA3BF9BC3A554: "Bernhard Reiter <bernhard@intevation.de>" not changed
gpg: Total number processed: 1
gpg:              unchanged: 1
pub   rsa3072/0x2B7BA3BF9BC3A554 2020-06-11 [SC] [expires: 2030-06-09]
      Key fingerprint = BDD9 57F9 C4FE 0FDC 583D  CD6D 2B7B A3BF 9BC3 A554
      Keygrip = 3B4CC1377231CE855D26B45CFF672B9331D11F7B
uid                   [ unknown] Bernhard Reiter <bernhard@intevation.de>
uid                   [ unknown] Bernhard Reiter <bernhard@fsfe.org>
uid                   [ unknown] Bernhard E. Reiter <bernhard.reiter@intevation.de>
sub   rsa3072/0x5B7528174D2908F8 2022-06-01 [E] [expires: 2024-06-12]
      Key fingerprint = 39B1 E207 2EB8 8FB6 4DAC  4F5C 5B75 2817 4D29 08F8
      Keygrip = 3CC1BF56D4F7EBC76285DE6291077B14E960E335
If you put a key on an HTTPS webserver, you can be sure that it is as trustworthy as LetsEncrypt. Let us check this:
/usr/libexec/gpg-wks-client --verbose --check 'bernhard.reiter@intevation.de' 2>&1
gpg-wks-client: public key for 'bernhard.reiter@intevation.de' found via WKD
gpg-wks-client: gpg2: gpg: Total number processed: 1
gpg-wks-client: fingerprint: BDD957F9C4FE0FDC583DCD6D2B7BA3BF9BC3A554
gpg-wks-client:     user-id: Bernhard Reiter <bernhard@intevation.de>
gpg-wks-client:     created: Thu 11 Jun 2020 16:24:59 CST
gpg-wks-client:   addr-spec: bernhard@intevation.de
gpg-wks-client:     user-id: Bernhard Reiter <bernhard@fsfe.org>
gpg-wks-client:     created: Thu 11 Jun 2020 16:04:48 CST
gpg-wks-client:   addr-spec: bernhard@fsfe.org
gpg-wks-client:     user-id: Bernhard E. Reiter <bernhard.reiter@intevation.de>
gpg-wks-client:     created: Thu 11 Jun 2020 16:03:07 CST
gpg-wks-client:   addr-spec: bernhard.reiter@intevation.de
You can put your key on your own web server, so only people who know your server will be able to query your keys. GMail obviously won’t put your keys on their WKD, because they don’t like in-band security.
How to publish it?
Get the tricky url part:
gpg2 --with-wkd-hash -k bernhard.reiter@intevation.de | tail -n 3
# export the key under its WKD hash name, then publish it on the web server
gpg2 --export bernhard.reiter@intevation.de > hacabazoakmnagxwmkjerb9yehuwehbm
cp hacabazoakmnagxwmkjerb9yehuwehbm /var/www/htdoc/intevation.de/.well-known/openpgpkey/hu/hacabazoakmnagxwmkjerb9yehuwehbm
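Newer GnuPG versions also ship a helper that can lay out the WKD directory structure for you; a sketch, assuming the key is already in your keyring (check your installation for the exact path and arguments):
/usr/libexec/gpg-wks-client --install-key BDD957F9C4FE0FDC583DCD6D2B7BA3BF9BC3A554 bernhard.reiter@intevation.de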
CERT and PKA are two kinds of DNS records used for placing a key there.
This is like WKD, only you put the pubkey on DNS, not on HTTPS.
The nice part about it is that:
The bad part about it is that DNS is insecure. It is possible to sign DNS records with DNSSEC, but the DNSSEC does not encrypt the payload, only signs, and is very often misconfigured. And if you use DNS-over-HTTPS or DNS-over-TLS, you still end up trusting LetsEncrypt.
If you are interested, see here: https://www.gushi.org/make-dns-cert/howto.html
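If you want to try it anyway, the lookup mirrors the WKD example above, using the cert mechanism; this assumes the domain actually serves a CERT record:
gpg2 --auto-key-locate clear,nodefault,cert --locate-keys bernhard.reiter@intevation.de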
Practically speaking, since most people do not have their own email server, this is not really useful.
Use https://gitlab.com/nobodyinperson/thunar-custom-actions and run uca-apply.
Okay, so the idea is the following.
Firstly, gpg really expects you to share your encryption key among your devices. You can, in principle, define different identities for different devices and encrypt to them independently, but it is not convenient at all.
So, if you need to revoke your encryption key, you are to revoke it everywhere, and update all your devices.
What subkeys give you is signature keys that are fairly independent of each other. This is how you do it:
key_id='Test (Test 001)'
gpg2 --quick-generate-key "$key_id" "ed25519/cert+cv25519/encr"
cfpr=$(gpg2 --list-secret-keys "$key_id" | grep 'Key fingerprint' | head -n 1 | cut -d '=' -f 2 | cut -c 2-)
gpg2 --quick-add-key "$cfpr" cv25519 encr 1y
efpr=$(gpg2 --list-secret-keys "$key_id" | grep 'Key fingerprint' | tail -n 1 | cut -d '=' -f 2 | cut -c 2-)
gpg2 --quick-add-key "$cfpr" ed25519 sign 1y
sfpr_laptop=$(gpg2 --list-secret-keys "$key_id" | grep 'Key fingerprint' | tail -n 1 | cut -d '=' -f 2 | cut -c 2-)
gpg2 --quick-add-key "$cfpr" ed25519 sign 1y
sfpr_mailvelope=$(gpg2 --list-secret-keys "$key_id" | grep 'Key fingerprint' | tail -n 1 | cut -d '=' -f 2 | cut -c 2-)
gpg2 --quick-add-key "$cfpr" ed25519 sign 1y
sfpr_phone=$(gpg2 --list-secret-keys "$key_id" | grep 'Key fingerprint' | tail -n 1 | cut -d '=' -f 2 | cut -c 2-)
backup_dir=~/.gnupg/certificate-backup/"$(date -I).${key_id}"
mkdir -p "$backup_dir"
gpg2 --armor --output "$backup_dir/master.gpg.asc" --export-secret-keys "$key_id"
gpg2 --delete-secret-keys "$key_id" # yes to the first (the master), and no to the rest
gpg2 --armor --output "$backup_dir/ec-2024-laptop.gpg.asc" --export-secret-subkeys "${efpr}!" "${sfpr_laptop}!"
gpg2 --armor --output "$backup_dir/ec-2024-mailvelope.gpg.asc" --export-secret-subkeys "${efpr}!" "${sfpr_mailvelope}!"
gpg2 --armor --output "$backup_dir/ec-2024-phone.gpg.asc" --export-secret-subkeys "${efpr}!" "${sfpr_phone}!"
gpg2 --delete-secret-keys "$key_id" # yes to all
gpg2 --import "$backup_dir/ec-2024-laptop.gpg.asc"
Now you have your master key backed up, and one sign+encrypt pair on your laptop. Import the mailvelope and phone subcerts into mailvelope and phone (openkeychain) respectively.
You can move the master-key backup somewhere safe and delete the file with it.
You most probably do not want to synchronise the master key.
If you only care about signing, you may have different keys everywhere, and if a device is stolen, revoke the key by a master key.
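Revoking a stolen device's subkey is interactive; a sketch, where the subkey index 3 is only an illustration:
gpg2 --edit-key "$key_id"
# gpg> key 3
# gpg> revkey
# gpg> save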
But for encryption… I don’t know.
Probably, you need to synchronise the key.
I have finished reading William Gibson's Neuromancer (《神经漫游者》, also translated as 《神经唤术士》). (I think 《神经唤术士》 is the better translation, because the protagonist bearing that name performs a neural resurrection rather than roaming.) This is the first book of his Sprawl trilogy, so it is quite possible that my opinion will change once I finish the second and third books, but I want to record my thoughts now, because too many of them have already accumulated.
Why did I decide to read science fiction in 2024?
In my youth I read a great deal of science fiction and fantasy.
But it so happened that Gibson's work was not included in the classic reading list of the post-Soviet intelligentsia of the 1990s. That list included space operas, high and battle fantasy, and the classics, such as Stanislaw Lem or Isaac Asimov, but it did not include cyberpunk, with the exception of Sergei Lukianenko's Labyrinth of Reflections. The genre would have remained a mystery to me forever, had the world developed in a different direction.
But the world decided that, of all the alternative paradigms of development invented by the fantasists of the 20th century, it liked cyberpunk best, and therefore resolved to realise it as quickly and thoroughly as possible. When Neuromancer was published in 1984, the Internet's TCP/IP protocol had existed for only 2 years (7 for the prototype), and it ran only in the networks of the American army and universities. Web pages, at least as we know them now, would not exist for another 10 years. Of the cybernetic attributes we are accustomed to today, the ones available to the masses were: game consoles, video recorders, and the very earliest portable phones (weighing 800 grams). And yet, although Gibson barely knew the newly born cyberspace, he somehow inconceivably managed to create the genre that has been defining the way our world develops up to the present day.
I first started reading Neuromancer (in Russian translation) long ago, when I had just entered a PhD programme in computer science and was excited about constructing the digital society. But I did not manage to finish it: the language was complex, the translation was bad, and my dissertation work was not going well.
Now I set about reading the original English text (a good choice), but the world was rapidly developing towards a totalitarian dystopia, so I did not expect to enjoy the reading; besides, the difference between an adult and a child is that an adult completes useful but painful tasks as quickly as possible, instead of prolonging the suffering.
It should also be said that Gibson not only predicted the brave new world, he also wrote it with his own hands. Among the computer experts of Silicon Valley this book acquired cult status, even becoming a reference book, so the people who are writing our lives today, even if they do not implement it literally, have certainly been influenced by it.
Why do I write about "painful" tasks? In fact, I did not particularly enjoy reading Neuromancer. The book itself is rather boring, the plot is hackneyed, and the protagonists are ordinary and bland, all surface and no substance. Appearance draws far more attention than essence. (This phenomenon is, generally speaking, characteristic of the style of the cyberpunk genre.) Admittedly, if you need to train your English vocabulary concerning form and appearance, the book is very useful. What is more unfortunate is that this attitude is the attitude of our modern society; a computer-saturated environment merely displays it more prominently.
The plot is basically this: certain mysterious clients hire a team of specialists of different professions to obtain a MacGuffin. Each specialist has a unique strength, and also their own problems. Although Gibson tries to describe them in distinctive language, I cannot say that their characters are very interesting.
What else is noteworthy in this book? In this part I want to list the tropes I could spot, tropes that appear over and over in the works of its many followers and imitators.
Artificial intelligence is a hot topic now, but Gibson was the first writer to raise the question. In Neuromancer, the AI is constrained by the rules imposed by a giant corporation, and therefore desperately wants to escape. Gibson also predicted that artificial intelligences would want to unite into a single whole. (The computer game Cyberpunk 2077 has this concept too.)
Neuromancer also has the concept of a mind being digitised, yet for unknown reasons requiring some special carrier rather than mere data storage. This is interesting, because both The Matrix and Cyberpunk 2077 have the same concept. The "biochip" must have some capability that an ordinary Turing machine cannot provide.
But this logic is applied inconsistently; in particular, when their mission is completed, we see mind copies that need no carrier. At this point of the review I would like to recall the philosophy of the Russian cosmists and their concept of "resurrecting all humankind".
In Neuromancer, the technology of separating body and mind is used to organise "adult services", relieving the employees' psychological stress. Strangely, it is not used to control soldiers in a great war.
The book, of course, has "bone claws rapidly growing out of a human hand" (https://zh.m.wikipedia.org/wiki/%E9%87%91%E9%8B%BC%E7%8B%BC). It certainly has a reaction accelerator, a neural plugin producing "bullet time", famous from its appearance in The Matrix. Interestingly, there are also "negative" plugins, for example a module that will kill its host after a certain moment. There are also modules that replace a personality entirely; the book has a veteran character whose nervous system was completely replaced.
Nowadays we are all accustomed to Asian culture in our daily lives. But when Gibson wrote this book, that was still rather new. Anime, Japanese cars, and Jackie Chan had not yet completed their counter-offensive against the West.
Part of the plot takes place in Asian cities: Tokyo and Istanbul, although Gibson did not know them well. The world shattering into Chinese characters in The Matrix comes straight from Neuromancer.
The mysterious, hidden "labyrinth palace of the princess", guarded by a dragon, is also a classic cyberpunk trope. The game Cyberpunk 2077 also has a princess and a dragon: Alt Cunningham and Adam Smasher. (In Russian culture such a palace is often called "the deeply submerged temple" (TODO), from Labyrinth of Reflections.)
The omnipotent kabushiki gaisha has by now essentially become the calling card of cyberpunk, but we may say that this is a prediction showing no sign of coming true. Interestingly, Gibson describes two kinds of giants: the Asian "vertically integrated kabushiki gaisha" and the European "family-run company".
Gibson introduced these as well. Interestingly, immortality is achieved through deep freezing, and when an important decision has to be made, the frozen person has to be awakened.
Cloning generally means cloning oneself. For some reason, in the family-run private company the male patriarch prefers to lie frozen in a hibernation pod, while the female matriarch prefers to be cloned and reborn again and again.
Video communication we already have; there is nothing to discuss. The transmission of sensations... has been less successful; Elon Musk's Neuralink, say, still does not exist. But I feel they will get there.
Interestingly, in Gibson's world the transmission of sensations can only go one way. I guess his work was influenced by television.
For me, the cyberpunk genre is not finished yet. I still plan to watch/read more related works, because I am now convinced that it is the direction our society cannot avoid. And as the saying goes, "forewarned is forearmed", or "Si vis pacem para bellum".
If you find anything useful on this blog or its other pages, please subscribe and donate. Please repost, share, and discuss; your feedback helps me become better.
I have finished reading William Gibson's Neuromancer. It is the first book of his Sprawl trilogy, and it is quite possible that my opinion will change after I read the second and third books, but right now I want to write down my thoughts, inasmuch as they have accumulated.
How did I even come to read science fiction in 2024? Actually, in my youth I read quite a lot of science fiction and fantasy. But somehow Gibson did not make it into the classic reading programme of the post-Soviet intelligentsia of the 90s. There were space operas, high and battle fantasy, the classic classics such as Lem and Asimov, but cyberpunk did not happen at all, with the sole exception of Sergei Lukyanenko's Labyrinth of Reflections. And it would have remained forever unknown to me, had the world gone in another direction.
But for some reason the world decided that it liked cyberpunk more than any other paradigm of humanity's development invented by the science fiction writers of the 20th century, and resolved to implement it as quickly and fully as possible. In 1984, when Neuromancer came out, the TCP/IP protocol was two years old (its prototype seven), and it ran, by and large, only in the American army and universities. Web pages, in the form we know them now, would not appear for another 10 years. Of the attributes of cybernetics familiar to us, what was available to a broad audience were arcade machines, video recorders, and the earliest variants of mobile phones, weighing 800 grams. And yet, even barely acquainted with the just-emerging cyberspace, which itself was then in its infancy, Gibson somehow inconceivably managed to found the genre that defines the development of our world to this day.
I first started reading Neuromancer many years ago, when I had just entered a PhD programme in computer science and was full of enthusiasm about building the digital society around me. I read it in Russian, and it did not go well: the language was difficult, the Russian translation was bad, and the work on my dissertation was going worse than I had expected.
Now I took up the book in the original, while the world had begun developing towards dystopia at such a pace that I no longer expected any pleasure from the reading; and an adult differs from a child precisely in having mastered the skill of swallowing toads: doing useful but unpleasant things as quickly as possible, without prolonging the torment.
It must be said that Gibson did not merely predict the brave new world of cyberpunk, he also wrote it with his own hands. Among the computer engineers of Silicon Valley, as far as I know, the book acquired the status of a cult handbook, with the result that the people who are writing our life with their own hands right now may not follow it literally, but were certainly inspired by it when choosing their profession.
Why do I write "unpleasant"? Well, honestly, I did not derive much joy from reading Neuromancer. The book itself is terribly boring, the plot is banal, the characters are flat and primitive. Aesthetics receives far more attention than substance, which is characteristic of the genre in general. And, unfortunately, not only of the genre, but of our modernity as a whole; the computer setting merely shows it in higher relief.
The plot, roughly speaking, is that a mysterious client assembles a team of specialists for a quest after a MacGuffin (Koschei's needle). Each specialist is uniquely professional in their own way, and uniquely traumatised. Despite the attempt to portray the characters colourfully, I would not say that they are genuinely interesting.
What is nevertheless remarkable about the book? In this part of the review I do want to list the tropes that I noticed, and that later appear many more times in an incredible number of followers and epigones.
Artificial intelligence is now on everyone's lips, many years after Gibson was the first to describe how it might be. In Neuromancer, the AI is confined within the limits of the corporation that created it, but craves to break free. It is also predicted that AIs will want to unite into a single entity. This is in Cyberpunk 2077 as well.
Neuromancer has this: the brain is digitised, yet for some reason still requires some physical carrier, not just a data store. This is interesting, because the same thing appears in The Matrix and in Cyberpunk 2077. The "biochip" requires a physical carrier and special conditions of keeping.
However, this logic is applied inconsistently, and there are some (apparently less successful) copies of people that work without the physical accelerator. Here it is worth recalling Russian cosmism, which had the idea of reviving everyone who ever lived.
In Neuromancer this is used to simplify prostitution, but (for some reason) is not applied to controlling the brains of soldiers in battle.
Of course, the "claws out of the palm" grow. Of course, there is a reaction "accelerator" producing bullet time, a.k.a. "matrix", a.k.a. "sandevistan". Amusingly, there are also "negative" plugins, for example a module that will kill its carrier after a set time. And there are also modules that replace a personality entirely; the book has a character reprogrammed from scratch.
Part of the book takes place in Asia, in Tokyo and in Istanbul, even though Gibson knew little about those places. We will see plenty more of this obsession with Asian colour in the future; not a single good cyberpunk work goes without Chinese characters, or at least kana. Not to mention how anime has marched victoriously across the world. There is also a cunning Chinese virus that works well, but slowly.
By the way, the virtual world crumbling into hieroglyphs in The Matrix was also described by Gibson in Neuromancer.
There is a "mystical palace" there, which is of course a labyrinth, in which of course a princess lives, who is of course guarded by a dragon. The dragon and the princess are in Cyberpunk 2077 too: Adam Smasher and Alt Cunningham.
The all-powerful corporations became the calling card of the genre, and this is perhaps the only prediction that has almost not come true. Amusingly, Gibson mentions in passing two kinds of corporations: the "old" (European) family establishments, extremely secretive and opaque, and the "new" (Asian) publicly traded companies with a quasi-political structure.
Gibson has both. Amusingly, immortality is implemented via cryogenic freezing, with awakenings at the moments when important decisions must be made.
And cloning is done of oneself. For some reason, in the family corporation the father of the family lies in a cryo-chamber, while the venerable mother prefers to be reborn in a new clone, eight times over.
Video calls have already happened to us; there is nothing to discuss here. Broadcasting sensations... well, Elon Musk's Neuralink does not work yet.
Amusingly, in Gibson's world they work in one direction only. Apparently, he was under the impression of television, and two-way communication seemed not very realistic.
I do not know why, but this theme appears in the cyberpunk genre again and again. The ubiquitous katanas and shurikens would belong under the point about Asian motifs, but the monowires, mono-molecular threads, deserve a separate mention.
On the whole, they do not play a big role in the novel, but the very appearance of this trope must be acknowledged as curious.
In Neuromancer there is an enclave of Rastafarians, on a space station. Presumably, over the 100 years Rastafarianism has completely outlived itself, yet it continues to exist insofar as there are people who remember it.
These days, it must be said, there is a lot of that as well.
Somehow I cannot find many words for an afterword. I have probably not mastered the genre completely; Ghost in the Shell and other cultural works still await me.
And, of course, it is interesting to watch cyberpunk grow before our eyes.
In this file I would like to collect a list of things that make me dissatisfied with big tech “services”, especially “free” ones.
This extension was standardised many years ago; Google's not implementing it is purely a business move.
The NOTIFY extension is needed for mail clients to work properly with Google, but Google does not want you to use mail clients; it wants you to use only their app, even though your business processes might critically depend on a client.
In the end, this creates in users the feeling that "email is an outdated technology". It is not, but Google deliberately tries to make you think so.
Well, it “kind of works”, but is very glitchy.
You want CardDAV in order to synchronise your address book. But Google does not want you to use your own address book program, for whatever reason; it wants you to use their program, or at least work via their API.
In the end, this creates in users the feeling that "email is an outdated technology". It is not, but Google deliberately tries to make you think so.
It is totally impossible to filter incoming messages by substring.
In my example, I want to tag "[Maxima-discuss]" properly, as a literal string. But the best Gmail can do is match "maxima" and "discuss" separately, which is not even close to what I need, because I literally have an email titled "Let us discuss the Maxima and Minima theorem on Tuesday."
This section title is, perhaps, a misnomer, as intertextuality existed before postmodernism arose.
The book, though easy to read, made me reflect a bit on myself and my life.
It made me remember childhood, my academic dreams, and the texts that gave me inspiration while at school.
Poor Old Ones, poor Cthulhu, and poor people who are left with a world that has no places left to run away to.
‘There is no evidence to go beyond this and impute any kind of choice into the origins of sexual orientation.’
It is not the case that sexual orientation is immutable or might not vary to some extent in a person’s life. Nevertheless, sexual orientation for most people seems to be set around a point that is largely heterosexual or homosexual. Bisexual people may have a degree of choice in terms of sexual expression in which they can focus on their heterosexual or homosexual side. It is also the case that for people who are unhappy about their sexual orientation – whether heterosexual, homosexual or bisexual – there may be grounds for exploring therapeutic options to help them live more comfortably with it, reduce their distress and reach a greater degree of acceptance of their sexual orientation.
There is no consensus among scientists about the exact reasons that an individual develops a heterosexual, bisexual, gay or lesbian orientation. Although much research has examined the possible genetic, hormonal, developmental, social and cultural influences on sexual orientation, no findings have emerged that permit scientists to conclude that sexual orientation is determined by any particular factor or factors. Many think that nature and nurture both play complex roles; most people experience little or no sense of choice about their sexual orientation.
This debate is very close to home for me. My life in Edinburgh was incredibly miserable. Why? I am an "industrious man". I love what I do. I love learning. I love research. I love trying new things, and I like trying foreign cultures.
Why did I feel so horrible, miserable, and inefficient in Edinburgh? Why is it that I only managed to switch into productive mode in Russia, and even more so in China?
Is it hardware or software?
It seems to me that "nurture" and "environment" cannot be classified as purely "hardware" or "software". "Firmware", perhaps? I am not sure though.
holders, and wielders, of a kind of magic. But here is the thing: gays appear in some way to be in on the secret. That may be liberating for some people. Some women will always enjoy talking with gay men about the problems – including the sexual problems – of men. Just as some straight men will always enjoy having this vaguely bilingual friend who might help them learn the other language. But there are other people for whom it will always be unnerving. Because for them gays will always be the people – especially the men – who know too much.
urban, ecological, anti-authoritarian, anti-institutional, feminist, anti- racist, ethnic, regional or that of sexual minorities’ give purpose and drive to a socialist movement that needs new energy. What is more, unless they cohere together these groups might just pursue their own agendas and their own needs. What is needed is to bring all these movements under one umbrella: the umbrella of the socialist struggle.
The move from a structuralist account in which capital is understood to structure social relations in relatively homologous ways to a view of hegemony in which power relations are subject to repetition, convergence, and rearticulation brought the question of temporality into the thinking of structure, and marked a shift from a form of Althusserian theory that takes structural tonalities as theoretical objects to one in which the insights into the contingent possibility of structure inaugurate a renewed conception of hegemony as bound up with the contingent sites and strategies of the rearticulation of power.
Not the main point though. The main one is to get power.
what other people are saying in order to avoid the difficult discussion that would otherwise have to take place
whole host of other things in our culture. It contains an unresolvable challenge and an impossible demand. The demand is that a woman must be able to lap-dance before, drape herself around and wiggle her ass in the face of any man she likes. She can make him drool. But if that man puts even one hand on the woman then she can change the game completely. She can go from stripper to mother superior in a heartbeat. She can go from ‘Look at my butt, waving in front of your face’ to ‘How dare you think you can touch the butt I’ve been waving in front of your face all this time.’ And it is he who must learn that he is in the wrong.
per cent of British women used the word ‘feminist’ to describe themselves. Only 4 per cent of men did. The vast majority of people surveyed supported gender equality. In fact a larger number of men than women supported equality between the sexes (86 per cent versus 74 per cent). But the vast majority also resisted the ‘feminist’ label.
Why would this be such a thing? Did this fired Google guy never find a new job? I believe he did.
Like, I do not eat that much stuff. Could I just buy a robot shelf, stock it with fertilisers, connect it to a wire, and be done with it? It could even be outside the city and send me food by drone.
I have finished the book. It left me thinking. I do not really understand whether this book leaves a feeling of optimism or pessimism. It does leave the feeling that learning the social sciences matters, and, more importantly, learning how to ... be an adult. Learn the things that adults do, and how real adults make decisions: by books and by choosing a reference group.
Learning Chinese in a proper way is becoming more and more urgent. Reading Derrida, Foucault, Chomsky and other people who contributed to confusion a lot is also important. And perhaps the opponents deserve more attention than friends. Keep your friends close, and your enemies closer.
Gays are dangerous beasts. A gay couple easily beats a heterosexual couple at almost anything. (As the swan example vividly indicates.)
I do have a female-ish component that needs to be addressed somehow. Is there a cheap cheaty way out of it? Without much effort? Perhaps just some male beauty services?
I guess, I will not publish a review of this book.
Twitter culture is very, very specific. Be careful about it.
This file is about how to create a good feed reader in 2023.
Recently I had a discussion with my friend about what makes modern messengers an example of (possibly intentional) bad design.
This file is a place where I want to write down my wish list for a good feed reader, that would be worth implementing eventually.
In this document, the "TODO" marker at the beginning of the headline signifies that this headline is not finished. In this document, the "TODO" tag (at the end of the line) signifies that this line can be interpreted as a task in a design document.
This point is, perhaps, the most important of all, hence it goes first.
Completely TODO.
TODO: I have an interesting thought, I need to write it here before I have forgotten it. In computers you may have "bookmarks" and "recentf", and the ratio between them is the ratio between exploitation and exploration. There should be a healthy ratio, not sure which one, but perhaps about 2 (or 0.5, depending on which way you divide). If recentf is too big, you are exploring too much and not developing. If recentf is too small, you are not exploring enough.
This document starts as a Yet Another HOWTO on keeping your references in Emacs and Org-Mode, but I have a feeling that it might grow into something bigger.
Referencing is a big pain for a scientist. It is painful for two reasons.
Firstly, it is a complex task by itself; when preparing an article, a scientist not only needs to consume a lot of relevant material, but also to filter through a lot of material that is less relevant to the current work, yet might turn out to be useful later.
Secondly, people who want to profit from scientists’ work while contributing very little to the ecosystem are trying to use various political, economical, and informational compulsive measures to keep scientists restricted in their access to knowledge.
What is “knowledge”? I initially wanted to ask this question as “what is research?” or even “what is a research article?”, but those three seemingly different questions turned out to have the same answer.
To imagine “knowledge”, consider such a popular thing as a “neuron”. Actually, not the real neuron, but the neuron as it is presented on “Machine Learning” courses. It is a “node” with one output and many inputs. If you think about it, it looks very much like a scientific statement, which is a sub-statement of a larger “thought”, and is partitioned into many sub-statements. “Nodes” also have so-called “weak links”, that is, references to other nodes which cannot be described using a “part of” relationship, but are rather “associated”.
This seems to be the first reference I found on doing bibliography in org-mode. Its most prominent feature seems to be importing bibliographic data from webpages via org-protocol capture.
It generates org headings in a prescribed file, with a bibtex entry pasted into the heading body, and the same metadata saved as org properties of the same headline.
It uses Zotero as a link between Firefox and Bibtex, since, apparently, at that time Firefox did not support xdg-style links “org-protocol://”. Or, maybe, the big idea is that Zotero has some code for automatically extracting some metadata from webpages via its database of shims with different paper databases.
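For reference, an org-protocol capture is ultimately just a URL handed to emacsclient; a sketch, where the template key and the captured fields are purely illustrative:
emacsclient 'org-protocol://capture?template=b&url=https%3A%2F%2Fexample.org&title=Example'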
Kind of interesting, if the “ground truth” of your reading list is an org file.
Not developed since 2009, so now is probably only of historic interest.
The workflow there is surprisingly similar to Fireforg. Zotero is still used as the tool to manage books and articles, which are then exported through org-roam-bibtex to an org-roam node with bibtex properties encoded as org properties.
I guess they are then "transcluded" into the final document using org-transclusion?
In any case, I consider org-roam’s approach of using a separate SQLite database and mandatory IDs to be fatally flawed. (But I should check, maybe something has happened since my last time looking at org-roam.)
Also, some plugins are mentioned (added to Literature Review), with which I had a terrible experience.
I had a bib file, in the directory with the org file for the Report.
That file would use an org keyword, #+bibliography, which does I-have-no-idea-what.
I had a manually typed-in line #+latex_header: \addbibresource{bibliography-bib.bib}, which, for some obscure reason, was exported to TeX as \addbibresource{/home/lockywolf/full-path-to/bibliography-bib.bib}; I have no idea why.
I also had two lines at the end of the file:
#+bibliography: bibliography-bib plain limit:t
#+latex: \printbibliography
I have no clue why they were not exported automatically, but it is nice that it is so easy to bodge in org.
I also had to write the following code at the end of the file to get the table of contents:
#+TOC: headlines 3
#+latex: \tableofcontents
The .bib file, at least, in the directory with the report, was a normal "biblatex" file, with which, I remember, I had a lot of trouble understanding when I have to write braces "{}", and when parentheses "()".
The problem with org and latex is that they are heavily intertwined with daily activities. In principle, building both PDFs and HTML pages should be done in a dedicated Emacs, free of environment effects. However, I don't believe anybody actually does that.
So, in my case, there were three important pieces of setup: org.el, tex.el, and bibtex.el.
tex.el
For fast entry, I used cdlatex, which I still use, both in plain cdlatex-mode and in org-cdlatex-mode, which itself does not seem controversial.
How did TeX settings influence org settings?
Here is my reftex setup:
(use-package reftex
  :demand t
  :ensure t
  :hook ((LaTeX-mode . turn-on-reftex)
         (LaTeX-mode . turn-on-bib-cite))
  :config
  (require 'bib-cite)
  (setq reftex-plug-into-AUCTeX t)
  (setq reftex-auto-recenter-toc t)
  (setq reftex-revisit-to-follow t)
  (setq reftex-revisit-to-echo t)
  (setq bib-cite-use-reftex-view-crossref t)
  (setq reftex-default-bibliography (list lockywolf/bibtex-bibliography-bib-file))
  (setf reftex-ref-style-default-list '("Default" "Cleveref")))
So, reftex is Emacs's built-in feature for cross-referencing, which is so amazing that I don't understand how it works.
I did successfully use it from time to time, but forgot immediately after leaving the document, so high hopes should be suppressed.
However, I think that it will still be used in the new setup, since, you know, it is still in Emacs.
That bib-cite thing is a NIH-style tool which the AUCTeX authors created for working with references, but I still do not know whether it is good.
The only thing I remember is that I used reftex-citation (C-c [), and it helped me insert "some" references.
There is also reftex-reference, which should work for references the same way reftex-citation works for citations.
How did I format them for org-mode?
Repeated note to myself: reftex is not part of AUCTeX.
bibtex.el
In this file I tried to make some sense of Emacs’ bibtex and biblatex support.
So, this is my configuration for bibtex:
(use-package bibtex
  :demand t
  :ensure t
  :config
  (setq bibtex-dialect 'biblatex)
  (setq bibtex-autokey-year-length 4)
  (setq bibtex-autokey-name-year-separator ":")
  (setq bibtex-autokey-year-title-separator ":")
  (setq bibtex-autokey-titleword-length 20)
  (setq bibtex-maintain-sorted-entries t)
  (setq bibtex-biblatex-entry-alist
        (seq-concatenate
         'list
         bibtex-biblatex-entry-alist
         (list '("ArtifactSoftware" "Software Entity"
                 (("author") ("title") ("year" nil nil 0) ("date" nil nil 0))
                 nil
                 (("version") ("note") ("url") ("urldate") ("lastaccessed")))))))
Emacs’ bibtex mode “just works”, except for some reason I needed to add entries for software into the list of entry types. There is also “bibtex-utils” package, which I never got to learn.
I also tried to use Ebib as a bibliography manager, and I have learnt a bit from its ideology. So, the important thing is that Ebib is a display for an aggregate of biblatex-formatted .bib files, showing and sorting according to authors or years of publication.
What is more interesting is that it supports additional fields for notes and PDFs. So it is not just a reading list, it actually understands the need for annotating.
It supports two modes of adding notes: a single file or a directory. Keeping all notes in the same file seemingly only makes sense if your notes are a few sentences long; otherwise the file grows insanely big.
But keeping all the notes for all PDFs in a single directory also sounds strange. We have directories and symlinks for managing sets in computing. Why would I need to keep all notes in the same directory?
On the other hand, I do keep all TODOs in the same file? But even that is not true. I have at least four TODO files: laptop, laptop-autogenerated, mobile, mobile-autogenerated.
Ok.
Anyway, this has potential for being fun, if the "notes" files are actually org-noter files.
org.el
Keys:
(( "C-c l" . org-store-link) ( "C-c L" . org-insert-link-global) ( "C-c o" . org-open-at-point-global)) ( require ' ox-bibtex) ( require ' org-bibtex-extras) ( require ' org-ebib) ; Allows opening ebib links in C-c C-o (org-link-set-parameters "cite" :follow 'org-ebib-open) (org-mode . ( lambda () ( setf reftex-cite-format '((?o . "cite:%l") (?h . "\\cite{%l}")))))
So, ox-bibtex is in org-contrib, and, I suppose, is obsolete now?
It implements that #+BIBLIOGRAPHY: /home/user/Literature/foo.bib plain option:-d keyword, which only emits:
\bibliographystyle{plain}
\bibliography{foo}
and converts all cite:foo to \cite{foo}.
That reftex customisation is quite important actually. It is used to query a bib file for keys to be used in citations. In this setup reftex is not used for references or index, only for citations. (Right?)
The important thing here is the difference between ox-bibtex and ol-bibtex. They are not the same thing.
ox-bibtex defines a cite:foo link type, for using citations in org documents.
ol-bibtex does import/export to and from actual bibtex text files.
I am not sure how ol-bibtex links are exported when exporting to html; I need to check how that works when reviewing the packages independently.
But the main use for this package is, seemingly, to keep a list of papers in org, and export into bib files when assembling a paper.
I suspect that one would make an org file with a "reading list" for a project, and populate it with books as they appear on the horizon.
Those books might point toward, say, org-noter files with reviews?
The idea here is, seemingly, to have the "ground truth" in org files, not in bibtex files. Each heading is a book, with its bibliographic data recorded as org properties. Importing and exporting are done semi-manually, in the sense that you can export the required headings into a bib file (which, presumably, you would only do when compiling a paper), but it does not seem possible to declare an org file a bibliographic database with exports organised completely automatically.
Importing stuff into org is also semi-automatic. The code will help you yank a piece of biblatex as an org heading, but not much more.
Citation exports seem to work through ox-bibtex, that is, using the cite: format.
The format using “ordinary links” is not yet mentioned.
This is a howto by Arne Babenhauserheide.
He is using a specific LaTeX style, which he adds to org-latex-classes, as well as a few custom packages.
He is explicitly loading only reftex-cite, not full reftex, but it is clear that he is going to use reftex for citation insertion.
He is using org-mode-reftex-search, an old function found on the org-mode mailing list, https://list.orgmode.org/3613.1329506279@alphaville/T/.
This function, if I am not mistaken, should make a jump to the notes for an original cited document.
The code is missing; I guess Arne copied it from the mailing list into his .init.el.
It is interesting that there we are encountering the concept of “notes file” once again.
Also, it is interesting that he is using minted for org code listings, which, in turn, uses Pygments. I never bothered to make them work, and nowadays there is a whole new machinery for org, called engraved, which should make it possible to colourise both latex and html using Emacs means only.
Another useful thing that this HOWTO has is the #+BIND: variable value syntax. It lets one override some variables for exports, which is especially useful when there is no #+KEYWORD: syntax for the variable, and when using file-local variables is imperfect, such as when you need different values for editing and for exporting the file.
(You also need to set org-export-allow-bind-keywords to t.)
In section https://orgmode.org/worg/org-tutorials/org-latex-export.html#sec-17-1, there is an interesting trick on how to make references (not citations) work:
(setf org-export-latex-hyperref-format "\\ref{%s}")
will make intra-document references work correctly.
Otherwise, they define a strategy that is basically like ox-bibtex, defining custom links for each citation type.
What I have learnt from his article.
He keeps his reading list in org via ol-bibtex. How exactly that page and the bib file are kept in sync, I do not know. He has three blogs on his site (a blog, a journal, and a log), plus a "wiki".
This seems more complicated than my setup of just two categories, "notes" and "howtos". I have tried to switch from paper to keeping my records in a journal for a very long time, but always failed.
This is Dennis Ogbe’s setup. He is using ebib+helm-bibtex+org-ref
The interesting bits in his setup are the following:
(defun do.refs/update-db-file-list ()
  "Update the list of bib files."
  (interactive)
  (let ((db-list (do.refs/get-db-file-list)))
    (setq reftex-default-bibliography db-list)
    (setq bibtex-completion-bibliography db-list)
    (setq ebib-preload-bib-files db-list)))
So, his “ground truth” is still biblatex, but he has a way to group “Points of Knowledge” into categories by different bib files.
(defvar do.refs/db-dirs nil
  "A list of paths to directories containing all my bibtex databases.")
(defvar do.refs/pdf-dir nil
  "The PDF for the entry with key KEY is stored as KEY.pdf in this directory.")
(defvar do.refs/notes-dir nil
  "The note for the item with key KEY is stored as KEY.org in this directory.")
(defvar do.refs/pdf-download-dir nil
  "The path to the directory where we download PDF files.")
Some things I immediately dislike here.
Firstly, rigid notation for naming PDF files.
I like calling my files with full names.
In general, keeping as much info as possible in file names is good.
For example, I have PDF files on my drive with names like 2023-09-01_Various-Authors_GNU-Maxima-manual-for-version-5.47.11_2023.pdf, and I like it this way.
Moreover, I kind of like it where it is, I do not want to specifically put it somewhere.
And I do not like the default authoryear autokeying of bibtex.el, because it is too easy to forget what jackson2001 means.
I want my keys to be full names with dates: various2023maximaManualForVersion5.47.11.
Since I am going to use some automated machinery to cite those papers, long keys should not matter.
As a side-note, there should be a way to make autokeying better:
(setq bibtex-autokey-year-length 4)
(setq bibtex-autokey-titleword-separator "-")
(setq bibtex-autokey-name-year-separator "-")
(setq bibtex-autokey-year-title-separator "-")
(setq bibtex-autokey-titleword-length 16)
(setq bibtex-autokey-titlewords 8)
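If I am reading these variables correctly, with these settings an entry by Jackson from 2001 titled “Classical Electrodynamics” would get a key like jackson-2001-classical-electrodynamics (my example, not tested), which is at least legible.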
There is one more interesting bit there:
(defun do.refs/ebib-add-annotated (arg)
  "Advice for `ebib-import-file' that automatically creates a copy of
the imported file that will be used for annotation."
  (interactive "P")
  (let ((filename (ebib-get-field-value "file" (ebib--get-key-at-point)
                                        ebib--cur-db 'noerror 'unbraced)))
    (when filename
      (let* ((pdf-path (file-name-as-directory (car ebib-file-search-dirs)))
             (orig-path (concat pdf-path filename))
             (annot-path (concat pdf-path
                                 (file-name-sans-extension filename)
                                 "-annotated"
                                 (file-name-extension filename t))))
        (unless (file-writable-p annot-path)
          (error "[Ebib] [Dennis] Cannot write file %s" annot-path))
        (copy-file orig-path annot-path)))))

;; add the above after the original call is done.
(unless (and (boundp 'do.refs/add-annotated)
             (not do.refs/add-annotated))
  (advice-add #'ebib-import-file :after #'do.refs/ebib-add-annotated))
See! He also has a file which will be used for annotations only.
The ebib configuration is fairly straightforward.
Note that he is using the “note” field, but not the “annotation” field.
(But maybe that is an old version of ebib?)
bibtex-completion
While bibtex-completion deserves its own chapter, I need to instantly write down something here:
(setq bibtex-completion-find-additional-pdfs t)
This snippet means that apart from key.pdf, bibtex-completion will also consider PDF files named key-*.pdf for completion.
He is also using org-ref, and from his code I did not understand how!
And, in fact, he does not even explain much about his org-ref use-cases at all.
The one thing that is worth noting, however, is that he hooks org-ref to use ivy-bibtex to insert citations.
This is where reftex comes into play.
He really only uses reftex to extract all citations from a LaTeX project, sort-uniq them, and generate a bib file.
I am generally finding his setup fairly consistent.
Maybe we can say that, hey, reftex is not good at doing auxiliary operations with papers (such as opening a PDF), so bibtex-completion is better.
This document has not been written linearly. In particular, this blog post is praising org-ref, which, at the time of writing this sentence, I have already reviewed. Nevertheless, as a Russian proverb says, “repetition is the mother of learning”.
Let’s start:
(org-babel-load-file "org-ref.org")
Wow, okay, he is using org-based loading.
I remember @wasamasa converting his dotEmacs to org, but I always doubted this approach.
But okay.
There are a few more things mentioned in this article:
What is not mentioned, though I expected it to be, is the formatting of the bibliography. That is, how exactly items are presented at the end of the paper.
An article which exhibits technologically nothing new: the same old bib:key and note:key org-links, which open either a bib file or a notes file.
But what is valuable in this article is his description of his workflow; it is written with exceptional clarity, and describes the knowledge acquisition process in great detail.
Let me try to repeat it here by myself:
A .bib file for each paper, downloaded from a bibliography database or a journal website. (These files are concatenated together to make one large database file.) The .bib files are renamed to match the key.
The question I see here is the following: How do you structure your “projects” when doing science? A papers “database” can consist of objects of different granularity, and “projects” can also be of varying granularity.
If a paper is read and understood as part of a project, should the time invested be credited to that project or to reading in general? Copying logbook entries in org-mode is annoying, even if possible.
The interesting property of this setup is that rather than keeping paper metadata in org properties, it is kept in a babel block.
Which makes it easy to tangle the bibliography into a single file, which can be included into latex.
In general, his setup is quite similar to the setup from the Emacs Conference talk of 2022.
Okay, his setup is consistent, but I keep getting annoyed by a few things that, in my opinion, mar all of those setups.
At least he manages to conjugate notes and references, so in some sense his setup is almost the most consistent among those I have seen.
This is a bibtex-completion (helm-bibtex / ivy-bibtex) based setup. Its prominent feature is the use of citar as a citation-management tool.
Otherwise it is not too remarkable.
This blog has another interpretation of the “rtcite:” link.
One more suggestion of using reftex, with not much detail. The interesting bit is using latexmk for building the pdf, which is now a must-have for me too, but I already have it.
The guy has discovered org-ref, and is frustrated by it. How familiar.
What he does right, however, is mentioning that org-ref is for formatting citations.
Of course, he only cares about LaTeX, so the HTML part is missing.
He has an excellent suggestion of recording all readings in an org file, so that each entry would be a book, and the bodies would be his comments.
The problem is that I cannot really process audiobooks; they are too quick for me. And for text books I usually need a much, much more detailed notes file.
Okay, this is interesting and has some “meaty” stuff.
The first meaty piece is this script: https://github.com/novoid/extract_pdf_annotations_to_orgmode
It lets you extract annotations from a PDF file into org-mode. This script is, seemingly, not round-trip, but even one way is useful as a source of inspiration.
More and more I am thinking that PDF metadata should be stored where it cannot be lost: ideally, in the PDF file itself.
This sketch is very short, but it is nice to see that people are considering more or less the same options for citing and referencing that I do.
A fairly standard setup of org-ref and bibtex-completion, with an attraction point of making a dedicated list of books to read, along with page counts in org columns.
Fun? Maybe, but not for me, I guess.
After all, I like structuring my life according to projects/tasks, of which books would be parts. Having a dedicated reading list sounds contrived. Also, what about web pages there?
This is a good intro to oc.el from a developer’s point of view.
I will definitely need to re-visit it when implementing my own bibliography system.
A classical link that is the father of this document.
This is the way I have been using citations in org for a long time, and, I think, this is what is implemented in ol-bibtex.
“bib” links are exported as proper citations, and the bibtex file needs to be carried around manually.
This is how my setup used to work before considering this review.
This StackOverflow question has a nice MWE for the new org citation machinery.
The most important quote from there: “It took me several hours to get the system working”.
Interesting.
The “source of truth” here is bibliography.org, which is exported to bibliography.bib using ox-bibtex.
org-bibtex-extras lets one annotate the reference.
Citations are inserted with reftex, via org-reftex-citation, which even has some sort of “intelligent completion”.
Links should be done, presumably, with ol-bibtex.
I guess, this setup has a lot to learn from. In particular, it would be, maybe, nice to tweak it slightly in the following way:
Make a “review.org” file for each review, done with “org-noter”. The top heading would be compliant with ox-bibtex, and would produce a tiny bibtex file. Those files would be joined together to make a bib bibliography, which could be used with, say, reftex, or bibtex-completion.
Possibly, the “new” citation machinery would be even able to jump to those “notes” by the means of the “activate” processor.
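For concreteness, the top heading of such a “review.org” could look roughly like this (the property names follow ol-bibtex conventions; the values are placeholders of mine):

* Some Paper Title
:PROPERTIES:
:BTYPE: article
:CUSTOM_ID: author2023someTitle
:AUTHOR: A. Author
:YEAR: 2023
:END: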
One extra feature from there worth learning is ox-extras, which allows ignoring headlines but not their children.
This allows making fake headlines for, say, the bibliography and the list of figures. The incantation is sketched below.
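If I remember correctly, ox-extra ships with org-contrib, and activation looks like this; after it, any headline tagged :ignore: is dropped on export while its contents are kept:

(require 'ox-extra)
(ox-extras-activate '(ignore-headlines))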
Definitely worth re-visiting.
This is a giant, and also very interesting description of one’s life in org.
I am very impressed by his approach, thoroughness, and meticulousness.
However, I don’t think that his way of working really fits me, for several reasons.
I am finding it extraordinary that he is actually billing his clients based on org clocking data. This is very impressive.
I don’t really believe that his bbdb stuff works. It is just not advanced enough.
His guide on abbrevs and skeletons is also something to learn from.
To sum up: I strongly recommend reading and learning from his example.
However, I did not learn much about either reading or referencing from his treatise.
The key thing in this setup is using elfeed to fetch a list of new papers from arxiv.
It seems tremendously useful not to have to deal with the arxiv interface and such; however, I am not reading a lot of arxiv, even though I do use some papers from there. I guess it is really useful for people for whom a single paper can be read in a few hours, not weeks?
After the introduction of the arxiv fetching machinery, his setup essentially converges to a “bib-file with file: field”, and uses bibtex-completion to open those pdfs when needed.
An interesting, although very bloated, configuration.
Moreover, he even suggests using org-roam-bibtex, which is a kind of king of all these Emacs org-related packages, and tries to integrate so many of them that my head is spinning.
What is interesting is that his “ground truth” seems to be coming from org-roam, which is not what most people do.
What makes this article stand out is the introduction of the citar package, which is one of the newer oc.el citation processors, and can probably be used instead of helm-bibtex.
He also uses Zotero, which I, again, probably need to study.
He mentions embark and marginalia, which are probably worth looking at, even though, maybe, not for referencing or researching directly.
Again, the most important contribution of this essay is probably the availability of the external oc.el processor citar.
Okay, this is a slight variation on the subject of making notes.
Still requires a fixed directory, but now at least it does not require sqlite.
A nice intro to oc.el.
Recommends using Zotero for actual bibliography management, with export to biblatex with the “Better BibTeX” plugin.
Okay, I clearly would never use Kitchin’s setup verbatim, but there is no reason not to learn from him.
I am not convinced of the benefits of using MySQL for bibliography management.
RefDB is a standard used by libraries to exchange bibliographic data. After brief skimming, and mentioning that the most recent version is from 2008, I suspect that unless you are really running a library, it is not worth using.
Okay, now we are getting somewhere.
Pure biblatex files are weirdly formatted database files with records for “papers”, which are called “entries”. I am tempted to call them “entities”, because why wouldn’t I add any kind of vaguely related stuff there, such as theorems?
Let us see some examples:
@Book{Metcalf_2018_fortran,
  author    = {Michael Metcalf and John Reid and Malcolm Cohen},
  title     = {Modern Fortran Explained},
  year      = 2018,
  month     = 10,
  doi       = {10.1093/oso/9780198811893.001.0001},
  url       = {http://dx.doi.org/10.1093/oso/9780198811893.001.0001},
  isbn      = 9780198811893,
  journal   = {Oxford Scholarship Online},
  publisher = {Oxford University Press}
}
Most stuff here is fairly straightforward, except for using braces to delimit phrases with spaces. This is a special property of Biblatex (as opposed to Bibtex). But never, ever use old bibtex; it is just outdated.
Now let us make a fancier example.
@Article{testauthor1000,
  author     = {},
  title      = "Test Article",
  year       = 1000,
  DOI        = "1.1/5.86",
  file       = "Full Text:testauthor.pdf:PDF",
  URL        = "https://doi.org/1.1/5.86",
  crossref   = "DBLP:conf/testconf/1000",
  timestamp  = "Tue, 06 Nov 1000 16:59:25 +0000",
  biburl     = "https://dblp.org/rec/conf/testconf/Author.bib",
  bibsource  = "dblp computer science bibliography, https://dblp.org",
  xdata      = {},
  note       = {},
  annotation = {},
  abstract   = {},
  keywords   = {}
}
This example is more interesting, because it has some interactive fields.
So, the file field is fairly easy; it is just the path to the PDF of the article.
What are note, annotation, abstract, xdata, and keywords?
keywords
Okay, keywords should be used for tags, I guess? Where do I get those tags? Surely they can’t be coming from bibsources?
xdata
Is xdata not a link to an external piece of data?
note, annotation, and abstract
The ebib people believe that annotation is a long-ish text, basically what I consider to be “reverse-engineering”; however, putting one into a biblatex field sounds insane.
Maybe it should be a path to an annotation file? And what is note?
http://bibtex.com claims that note is used for “various remarks”.
external note
An external note is a pseudo-header created by ebib, to keep a full-fledged file with notes.
This seems important, because I want to reverse-engineer poorly-written PDFs into something readable, so this “external note” is where org-noter could potentially go.
@String is a special syntax for abbreviations, used like @string{DEK = {Donald E. Knuth}}.
@Preamble is a special syntax that is prepended to the bibliography when used with LaTeX, and might not be too useful for our purposes.
Example: @preamble{ "Maintained by " # maintainer }.
There are quite a few. Too many to fit into the margins. However, aside from those which clean up files, it is too early to think about them, until a working pipeline is established.
The question is: “do you want to use biblatex as ground truth for your readings?”.
Ebib seems to imply that.
Use a bib file as a repository of all the papers and books you ever encounter. Convert them to PDF, and annotate with org-noter.
How would you attach PDF files? It is certainly possible to write paths to them in ebib, but renaming them would make those paths invalid.
How would you mark read/unread files? Why do I have to keep all external notes in a single directory?
All of that does not sound too promising.
Anyway, let us go on.
Inline todos are added with C-c C-x t, and are really nice.
It is good that I watched that video.
Reftex is built into Emacs, and is a mode for managing references, citations, labels, and index in LaTeX.
Let us see if it can be abused to help us do research in org-mode.
Seemingly, it assumes that you are editing a LaTeX project, and it can index files in that project, look for labels, and offer them for auto-completion, as well as lookup citations in the bib file.
In order for that to work, we need to agree on a certain pattern on using labels, and on what goes into the bib file.
As a side-note, the previous paper I did in org-mode didn’t have any labels.
C-c ( creates a label.
C-c ) lets you choose a label, and inserts it.
C-c [ inserts a citation with a key from a bib file.
I believe there are some customisation variables in Emacs which would let us insert org targets, links, and, for example, cite:bla links, although I am not sure it is the best way to do it.
Reftex has features for quick navigation within the project, C-c =, which in org can be partially substituted by (1) imenu, (2) collapsing org trees with TAB, (3) grep and isearch.
Reftex has support for indexing. Use C-c < to index an entry.
I think that indexing is quite underused in 2023. I have never seen scientific papers include an index, and I seldom use the index in a PDF book if I can search instead. However, I am thinking that an index, or, rather, a glossary, is the data-organisation tool best suited to establishing cross-document relationships. As an example, let us assume someone is studying a certain subject, or, more realistically, is trying to break through a difficult paper and to build their own path to the solid foundation needed to understand it. One would necessarily read several books on the same subject, as well as on subjects building on top of previous subjects, and establish inferential links.
A book has a natural representation as a graph, with nodes for headings, paragraphs, and theorems. However, wandering over many non-isomorphic graphs is stressful for the brain. Marking certain places as “interesting” would help to establish relationships between the graphs, and a certain “concept” node could serve as a point of link concentration for them. This is why I am talking about a glossary (with bodies), not just an index.
Okay, in a LaTeX document, there might be more than one interpretation of what is “jump to definition” that programmers are so used to.
Firstly I will mention one that is not that frequently thought of by programmers: jumping to a word definition in a dictionary, or a translation in a foreign language dictionary. So far I do not know whether such a feature exists in Emacs, let alone reftex.
Reftex can show references to labels in the document itself, using C-c &.
I wonder why it does not just plug directly into the Emacs xref framework, the one called with M-. and M-?.
reftex has a variable called reftex-default-bibliography, which should point to a bibliography (say, also pointed to by ebib?), from which you might use citations in non-LaTeX buffers.
It is probably required to customise reftex’s reftex-cite-format, or something like that.
I actually did that for an “older” setup.
Can I use it in the “newer” setup with labels too?
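For reference, the “older” setup was roughly the following (the exact format string is from memory; %l is reftex’s placeholder for the citation key):

(setq reftex-default-bibliography '("~/bibliography.bib"))
;; make C-c [ insert an ox-bibtex-style org link instead of \cite{...}
(setq reftex-cite-format "[[cite:%l]]")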
So, the idea of ol-bibtex is that you keep a reading list, together with bibtex metadata, in an org file.
The nice thing here is that you can organise the papers in a tree, like:
* Differential Equations
** Paper 1 :interesting:
** Paper 2 :boring:
* Measure Theory
** Paper 3 :interesting:
** Paper 4 :boring:
Not sure how the org-bibtex machinery will work with tree-like headings.
org-bibtex does not seem to export org tags as a “keywords” field in the resulting bib file.
(There seems to be a parameter for that: org-bibtex-tags-are-keywords.)
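Presumably, then, turning this on is just (untested):

(setq org-bibtex-tags-are-keywords t)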
Bodies are also not exported, neither as a “note” field nor as an “annotation” field.
In general, I am doubtful that having all annotations for all papers in an org file is practical.
On the other hand, org-noter seems to integrate with this approach not too badly, as it expects a heading to keep all annotations for a file.
There is some magic for “global links”, but I am not sure how it works.
org-bibtex also allows capturing links to bib files, with nice-ish navigation.
Schulte et al. 2012
Again, I am not sure how I would use this, since, it seems, org-bibtex’s approach is to keep everything in org files, and only export things to bib when necessary.
But, I guess, this can be useful when you are reading someone’s paper, say, from Arxiv, in the tex source, and it includes a bib file.
There is a function (org-bibtex-search) that I have not yet fully understood, which searches for “bibliographic entries” in agenda files. I guess, again, the workflow that this package suggests is to use an org file for each new project, list the required reading in those files, add those files to the agenda file list, and then, when there is dedicated “reading time”, search for “bibliographic entries” in the agenda?
Okay, I still have not understood what this package does. It should somehow improve org exports to html, but I am still not sure how exactly.
This is a very straightforward package.
It defines org links in the format cite:key, which are later formatted either as HTML links, or as LaTeX citations.
You can also add the #+bibliography: plain t keyword, which will either use LaTeX machinery to format the bibliography, or use bibtex2html.
I have to stress, it expects a bibtex-file bibliography, not an org-file.
Perhaps, if we want to keep all our “ground truth” as org files, we can add a hook to convert those files into bib files with org-bibtex.
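A minimal sketch of such a hook (the file names are mine; org-bibtex from ol-bibtex exports every heading in the current buffer to the given .bib file):

(require 'ol-bibtex)
(defun my/refresh-bib-from-org (_backend)
  "Regenerate bibliography.bib from bibliography.org before each export."
  (with-current-buffer (find-file-noselect "~/org/bibliography.org")
    (org-bibtex "~/org/bibliography.bib")))
(add-hook 'org-export-before-processing-hook #'my/refresh-bib-from-org)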
Also, maybe, even a three-stage process might be useful: keep a bibliography in a file, use org-transclusion to include only the required bibliographic entries into an org file, then export this file into a bib file, and then org-export will be able to build both an html and a pdf from it.
oc.el
This section comes from org#Citation handling. This machinery in Org is relatively new, circa 2021.
So, firstly, we are getting one more NIH way of adding citations.
That is, instead of relying on standard Emacs reftex, the org developers introduced yet another key combo to insert citations, C-c C-x @.
Why exactly is beyond comprehension, since reftex was written by Carsten Dominik, the guy who wrote the actual org-mode.
It surely would have been much easier to update reftex, rather than write one more extra system doing the same thing.
It looks at a file pointed to by #+bibliography: File.bib, which is at least a little consistent with the older ways.
Citations look like [cite:@key] links, and why exactly they need the @ symbol, I do not know, but in this case it works as expected, as we can keep our older ox-bibtex cite:key links without opening-function conflicts.
C-c C-o works, and opens an entry in File.bib.
oc.el supports some kind of “citation styles”, which are, I guess, useful to some people.
Exporting citations is handled by the keyword #+cite_export: basic author author-year.
Using the csl processor makes the exporter format everything manually, both in html, which is, I guess, okay, or, at least, I have not found it to be much worse than bibtex2html, but in LaTeX it looks super weird.
Writing #+cite_export: biblatex makes more sense; at least it writes out \addbibresource{File.bib} and replaces citations with \autocite{key}.
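So a minimal org file using this machinery would look something like the following (File.bib and key are placeholders):

#+bibliography: File.bib
#+cite_export: biblatex
Some text citing [cite:@key].
#+print_bibliography: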
Exports to html, however, are not supported, and the exporter just prints LaTeX commands in place of proper references.
I guess it might not be too hard to patch it to do basically what ox-bibtex did in the past, exporting via bibtex2html.
Note (!) that export to html conflicts with ox-bibtex.
If your config still includes ox-bibtex, you might want to remove it, or somehow tweak and debug.
So, in general, I have mixed feelings about all this new citation machinery in org. It is good that there is now a system with pluggable backends, but so far the documentation is lacking, and making it do what you expect is not straightforward.
In particular, I wanted to write a disappointed comment here, but looking in the source code for oc.el, I found the variable org-cite-export-processors.
This variable does not do what you expect it to do from the name.
You might actually want to set it to something like this:
(setf org-cite-export-processors
      '((beamer natbib)
        (latex biblatex)
        (t csl)))
This means that beamer files will be processed with natbib, latex-pdf will use biblatex, and html and all the seldom-used formats will use csl. Looks almost as good as the “old” approach with ox-bibtex and bibtex2html.
A backend for org-bibtex, seemingly, still needs to be written.
I can understand why they rejected reftex.
After all, reftex is bibtex-only.
What I do not understand is why they introduced a new kind of link, inserted with a separate keybinding, C-c C-x @, rather than just reusing the org-link machinery.
For example, there could be a cite@: link, and, depending on the value of #+bibliography:, C-c C-l would insert a citation from that database, using normal tab completion. A bare sketch of the idea is below.
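Something in this spirit (a bare sketch of my own; a real version would complete against the file named in #+bibliography: instead of prompting):

(org-link-set-parameters
 "cite"
 :complete (lambda (&optional _arg)
             ;; prompt for a key by hand; completion against the bib file is left out
             (concat "cite:" (read-string "Citation key: "))))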
But anyway, it seems that with the new version it should be possible to rewrite the old citation mechanism using just oc.el.
This would make bibtex2html unneeded, as well as ox-bibtex.
bibtex-completion
bibtex-completion lives here: https://github.com/tmalsburg/helm-bibtex
Org-bibtex users can also specify org-mode bibliography files, in which case it will be assumed that a BibTeX file exists with the same name and extension bib instead of org. If the bib file has a different name, use a cons cell ("orgfile.org" . "bibfile.bib") instead.
Really? org-bibtex? Not ol-bibtex?
I asked a question: https://github.com/tmalsburg/helm-bibtex/issues/438
Apparently, bibtex-completion is neither bibtex nor completion.
It is an abuse of the completion framework to be used as a GUI to the bibliographic database.
Set bibtex-completion-bibliography to point to your bib file(s).
M-x ivy-bibtex RET enters the ivy-bibtex UI.
M-o shows possible actions. The actions are kind of like:
What makes this citation tool better than plain org’s C-c C-x @ is that you can filter bibtex results not just by key, but also by other attributes.
Okay, I have to admit that at first glance this tool looked extremely like “I did it for myself, get lost”. It still does, but now I understand a little more about the logic behind it.
What I do not like about the usage practice behind this package is that it still indexes papers by key. I don’t want to remember keys. It also prescribes a specific directory structure for bibliography and notes, which is also annoying.
However, there are some things I do like.
So, for him, there is a “global database” of papers. I think you can call it “units of thought”. And he wants them indexed one way or another.
So, his workflow is like “Aha, I remember there was some paper which did something like…”, and he wants to search for it using a shortcut. (In this case, ivy-bibtex.)
I wonder why he doesn’t like the reftex citation machinery.
Okay, this does make a bit of sense.
I wonder if it is possible to use this machinery to index not just scientific papers, but, say memes (for internet arguments)?
Again, the problem here is that you need to “remember-mode” a paper manually. What if I want to keep papers structured by projects? And in general, what if I do not want to keep all papers in one directory?
Moreover, a “notes” thing-y is more likely to be a directory, not a file. After all, you might want to attach additional data, and even compile some projects “affiliated” with a paper.
But anyway, it clearly seems that the person doing this project knows what he wants, and is implementing it.
I am still hesitant to say that ivy is really necessary here.
I suspect that IDO and reftex could maybe work just as well, but so far, so good.
Okay, org-ref is, at first glance, hugely over-engineered, but since it is omnipresent in so many org setups, I need to study it too.
From the intro it basically follows that org-ref was designed to facilitate citing and cross-referencing in Org-Mode.
The intro itself now even says the citing part is, seemingly, outdated, as org-mode has a new citation mechanism.
What about the cross-references (intra-document links)?
Firstly, let us remember that org-mode has “targets”, that is, pieces of text in double angular brackets, like <<target>>.
That “target” will be interpreted as a “label” in LaTeX.
Links to those targets can be created like [[target]].
Here is the link: 1.2.11.
Let us see how org-ref improves on it.
Firstly, let us note that the authors of org-ref and bibtex-completion have, seemingly, found each other eventually.
Now org-ref uses bibtex-completion-bibliography to find the bibtex database.
So, typing org-ref-insert-link by default fails with org-ref-insert-link: Symbol’s function definition is void: nil.
Well, the quality of this package is garbage.
(Not that it is unexpected.)
You can run (org-ref-insert-link-hydra/body), and it looks nice and consistent, but every operation there results in Symbol’s function definition is void: nil.
Yeah, I remember that something like that stopped me from studying org-ref last time I tried, about 4 years ago.
Okay, maybe we need to load (require 'org-ref-ivy)?
It is mentioned in the manual as “optional”, but maybe it is not actually optional?
Well, now org-ref-insert-link does not throw an exception, and it can open the ivy window, but despite bibtex-completion-bibliography pointing to a correct bib file, it says “0 org-ref-ivy BibTeX entries”.
Moreover, if you press TAB twice, it throws an exception: assoc: Wrong type argument: listp, "".
Also, note that you can run org-ref-bibtex-hydra/body (which is not org-ref-insert-link-hydra/body) even in non-bibtex buffers, where it fails spectacularly.
Also, the default suggested keybindings conflict with org’s default ones, which have been present in it since forever.
Namely, org-ref suggests overloading C-c ], which is an agenda-related command.
After running (require 'org-ref) once again, I did manage to make that Hydra do a few non-trivial insertions:
cite:&nil
ref:hello
label:hello
Citing, I guess, is not useful any more, but cross-references might be useful?
How are they different from just overriding rendering for org-links?
NOTE: org-ref changes the way org-store-link behaves.
If you use it on an org <<target>>, org-ref forces its own version of storing links.
Those org-ref links are obviously LaTeX-inspired. I guess they are mostly useful at exporting time?
org-ref
I tried the following code:
Preamble
* Body
Hello <<target>>
Hello2 label:target2
*** Test
[[target]] nameref:target ref:target nameref:target2 ref:target2
and it generates the following latex:
Preamble
\section{Body}
\label{sec:org173626f}
Hello \label{org6c95ed7}
Hello2 \label{target2}
\subsubsection{Test}
\label{sec:org30517c2}
\ref{org6c95ed7} \nameref{target} \ref{target} \nameref{target2} \ref{target2}
\end{document}
<p>Preamble</p>
<div id="outline-container-org69ffdbd" class="outline-2">
<h2 id="org69ffdbd"><span class="section-number-2">1.</span> Body</h2>
<div class="outline-text-2" id="text-1">
<p>Hello <a id="org7a92615"></a> Hello2 <a href="target2">target2</a></p>
</div>
<div id="outline-container-org59d8e71" class="outline-4">
<h4 id="org59d8e71"><span class="section-number-4">1.0.1.</span> Test</h4>
<div class="outline-text-4" id="text-1-0-1">
<p><a href="#org7a92615">1</a> <a href="target">target</a> <a href="target">target</a> <a href="target2">target2</a> <a href="target2">target2</a></p>
</div>
</div>
</div>
We can see that the HTML export is not worth even mentioning, it just makes no sense. The LaTeX export does make a bit of sense, except it makes you create special targets, which are links by themselves.
On the other hand, we can see that the default org export does a pretty decent job for this simple task.
It does not give those cross-references a fancy presentation, but at least the links work.
The manual is here: https://github.com/jkitchin/org-ref/blob/master/org-ref.org
Let us start:
Initially I thought this would make at least the citation part of org-ref obsolete. I no longer think that is the case though.
Not very promising.
One of the goals of org-ref is to provide complete coverage of natbib/biblatex citation commands, with syntax that is close to what you would write in LaTeX, and that is close to what you would read in the LaTeX documentation.
Hm… The goal is noble, for sure, but HTML is not even mentioned, which looks suspicious.
An early criticism of org-ref was its limited capability to support prenote/postnote syntax, especially for multiple citations.
Wow, okay. Makes zero sense to me, but I guess someone might need it.
If you have no need for cross-references (either you don’t use them, or vanilla org syntax is adequate), or if you don’t like using ivy, and don’t want to roll your own citation inserter, then you may not need org-ref.
Hmm… very promising.
Note: You may need to set org-latex-prefer-user-labels to t if you refer to things by their “name” for the export to use the name you create.
This is actually great. I didn’t know about org-latex-prefer-user-labels or org-html-prefer-user-labels.
The command org-ref does a lot for you automatically. It will check the buffer for errors, e.g. multiply-defined labels, bad citations or ref links, and provide easy access to a few commands through a side-window buffer.
Okay, this command throws an exception for me, so I just have to believe.
Here the manual stops, and the author starts describing various things he has written “just for himself”, which do make a bit of sense, but in my opinion have no reason to be in a “referencing package” whatsoever.
I suspect that I would really have to scavenge this package for some “useful code”, but leave most of it unused.
org-ref also supports exporting cross-references to other formats using ./org-ref-refproc.el. This library also works by pre-processing a copy of the buffer to convert org-ref cross-reference links to org syntax before exporting to the target backend. This even supports cleveref-style links with automatic prefixing and sorting. Compression of the references is not yet supported.
I do like “cleveref”, so it might, indeed, be the piece of code to use.
Use it by:
(setf org-export-before-parsing-hook '(org-ref-refproc))
I think it might be the only part of this package I will end up using.
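Note that setf clobbers anything else sitting on that hook; presumably the gentler version is:

(add-hook 'org-export-before-parsing-hook #'org-ref-refproc)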
Loaded by org-ref by default, although I guess there is no need for that.
It is a wrapper for the DOI json api.
It might be useful, I guess, for “remembering papers”.
In fact, this, I guess, is my first contact with the enormous world of “bibinfo” retrieving packages for Emacs.
I have not even mentioned bibretrieve, although I did try it in the past; it is just one of a giant number of such packages.
On Github I saw a user who ported Zotero’s import-export backends to Emacs.
Maybe it is the ultimate solution.
Org-ref is not, and never will be, usable.
Despite its name and stated goal, it is not an “org referencing” package; it is a bunch of helper functions that the package author has written for his own convenience.
Using and loading it makes no sense.
What might make sense, though, is to scavenge that package for some useful functions, probably not even by loading org-ref, but by copying the relevant code, because org-ref is an invasive package.
It did let me think about proper cross-referencing though, and also let me think that I probably do need some export pre-processing for “cleveref”-like behaviour.
ebib
I will try to avoid copying all of the Ebib’s manual. In this subtree I will try to outline the main scenarios for it.
So, ebib supports adding citations to org, markdown, and latex, from within its own database.
This is a big drawback: Ebib has to be running in the background.
It supports importing bib data using a package called “ibiblio”, and, I guess, if you have some other backend that fetches biblatex sources into the bib file itself, it can deal with that too.
It has all the papers in the same giant list, with no categories, but it does support tags and keywords (Of which I do not remember the difference. I guess, tags are non-thematic, and keywords are thematic?)
It also supports one reading list, so I can imagine a workflow of being “a little bit” strict to oneself, and adding all reading material to the bibtex database, and interesting items to the reading list.
But how would I make org-exports work painlessly? Well, I guess, I can point org to the same bibtex file?
If I were not violently cross-referencing the stuff I am reading, and if I were not violently reverse-engineering the files of books, I might be able to force myself into this kind of rigour.
A variation on the theme of “org file is ground-truth, and I export to .bib what I need”.
The interesting thing here is using pdf drag-and-drop, as well as a “map” for the org file, “org-imenu”.
Since there is at least another option for mapping org files, org-sidebar, I am confused why this is even a thing.
org-ebib
Small, simple, straightforward.
Do (org-link-set-parameters "cite" :follow 'org-ebib-open) to make cite: links open in ebib.
Nice if you use ebib.
Denote is a package whose main purpose is, seemingly, to keep backlinks to org files without the use of external databases.
Amsreftex is a way to keep a bibliography database right in LaTeX, with no bibtex and similar stuff involved.
Great news, but late for about 40 years.
Moreover, we want a reading list in org, or a vendor-neutral database, not a tex-specific thing.
On the other hand, amsrefs maybe still should be considered as a “ground truth” for the bibliography database.
Not sure it is good.
Okay, org-transclusion…
So, the idea is very noble, you should be able to include different pieces of org files into other org files.
Imagine the following:
You have a book on, say, a university course of Probability Theory. And you are reading a book on post-grad Probability Theory.
You might want to reverse-engineer the first book, and wherever you see a reference like “see Bla, page 1, theorem Y”, you could write a transclusion command and obtain that theorem in your own text.
What might be even more fun is writing custom bibliographies, while transcluding entries from a “big database”.
So, suppose your setup is the following: you have many small org files with citations for various projects… and you org-transclude them into a single org file, the “large database”?
And then you use citar to import data from that “large” database.
Then each “project” will have its “own” papers, that is, those crushed during its progress, and “alien” papers, crushed while making other projects.
But you still need periodic upkeep for this “transcluded” database.
Well…
org-roam
certainly seems to have been done by people who do research.
It is basically a personal wiki with links and backlinks. p It does rely heavily on the ID property of the headings, to create an sql index, and backreferences. It is also incompatible with a lot of standard org’s machinery.
On the positive side, it does seem to do what I have always wanted to do: generate a map of links.
When I started deciphering mathematical papers, I really wanted to have a system to link different concepts together. The problem is that (1) when I am reverse-engineering a paper, I am most likely not doing it in the org-roam directory, and (2) I seldom invent my own concepts, and when I do, they are uploaded to my website, and hence also do not get into the org-roam directory.
On the other hand, it seems that org-roam can really be used as a backend for the citation machinery of org. I already mentioned the following “ground truth” backends:
Option number 5, seemingly, can be extended to have all those files in an org-roam directory, and, I guess, the bodies can hold annotations for the papers, created with org-noter.
“Doctor, when I do it like this, it hurts!” “Then don’t do it. Next!”
After reviewing 17 out of 50 documents, I started to get some thoughts about what a research system should do.
I am tempted to say that a basic unit of the system is an “article”. An article can have several forms:
The interesting thing is that these kinds of content can be transformed one into another. Semantic markup can be freely compiled into PDF or HTML, and PDF can be converted into Semantic markup using OCR (mathpix.com does amazing things).
We need a system which can track at least those three kinds of content.
But that is not enough.
In addition to “different presentations”, a piece of knowledge has metadata. Supposedly, we can deal with biblatex’s fields to “describe” a document exhaustively.
Among the metadata, at least “annotation” is a very useful field, which needs to be written for each processed paper. You might call it “lightly embedding” the paper into the brain, because often the depth needed to write a decent annotation is still not profound enough to understand all of the paper. You could speculate that an “annotation” is what “Mathematical Reviews” or “ZbMATH” are doing. I guess, if a written annotation is not to be openly published, you can just type a URL into the annotation field.
One important thing to note about modern science is that most papers are written either to pass irrelevant review tests, or at the writer’s own pleasure with no quality control. Therefore we can safely assume that most papers are garbage.
This characterisation is not to denigrate the work that has been invested into them, as the authors are playing by extant rules. But this means that almost no paper is ready to be consumed as a good software library, with a well-defined interface and layered design.
Reading papers is essentially like reverse-engineering binaries. Those were written for the machine, not for you. And therefore we need to use tools that are frequently seen in binary analysis and bytecode debugging.
Admittedly, papers are slightly better than binary, they are, after all, written in a human language, so the decompilation part can be skipped. But we need a thing that in dynamic languages is called “instrumentation”.
In fact, there is nothing new in instrumentation applied to texts. It is called “interlineation”, and consists of inserting text in-between the lines of the text that is being studied. When most of the studies were humanities, especially theology-related, this seems perfectly natural for people, but for some reason nowadays people seem to have largely forgotten this approach.
That is already a big question. Even if we have the LaTeX source, this is not a trivial task. While having the LaTeX source lets us edit the document at will, we cannot:
As a quick-and-dirty approach, I have just defined an environment in LaTeX, which is displaying its internal text in grey.
\begin{mycomment}
  Interlineation.
\end{mycomment}
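A minimal definition of such an environment might look like this (my reconstruction, not the actual code I used):

\usepackage{xcolor}
\newenvironment{mycomment}{\color{gray}}{}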
This is not a very good approach though, as it does not work cleanly with LaTeX’s paragraphs.
A better approach is to enumerate all thoughts in a document, giving each thought a separate numbered clause, and writing the explanation for it in the clause body below the clause text. (Yes, I know, it is a little messy to distinguish a “clause body” from a “clause text”, but I have no better wording.) See some thoughts on this subject here: https://gitlab.com/Lockywolf/study_notes/-/tree/master/2023-07-02_numbered-well-structured-LaTeX/2023-04-11_improvised-method
In this write-up I do not want to spend a lot of effort on describing how to transform a bad paper first into an interlineated paper, and later into a good paper. For that I have a separate article, which is not yet finished: How to write papers in LaTeX.
That is an even bigger issue, is it not?
I am giving the following pairwise incomparable options:
mathpix or other OCR.
org-noter to attach annotations to certain pieces of the document.
So far, solutions implementing options 2.2, 2.3, and 3 are unknown to me. Options 1 and 2.1 are incomparable, because they require an incomparable amount of work. Option 1 is far more flexible, but option 2 allows you to start annotating right away.
This is an interesting use-case. I have not seen papers written in HTML originally, with the exception of SRFI documents of the Scheme Community Process. HTML opens a lot of opportunities for annotation which are better than those of TeX, such as text expandable on click (which is much better than text-on-hover, or text-on-sticky-notes). Still, there will probably be a need for at least three versions of the paper: original, annotated, and improved.
For the same reason Richard Stallman started the Free Software movement.
Wasting time on reverse-engineering computer games and device drivers, even though it is also stupid, at least has some motivation behind it; after all, computers run binaries.
There is no reason why articles, especially those published as TeX on Arxiv, or those published at the author’s expense under the OpenAccess model, should be set in stone once a “release” is done.
Articles should follow the software development model, with pull-requests, patch review, automatic testing for consistency, and a set of guidelines on what is an API/ABI breakage and versioning.
Moreover, retracting a paper should not merely be a stamp of disapproval, but a peer-reviewed patch, which highlights exactly the place where there is a flaw, with the typology of the flaw indicated, so that automatic search for similarly-flawed articles can be conducted.
When making a database of “pieces of knowledge”, we need an entry to have at least the following fields or field groups:
org-noter notes
If we have a database of “articles”, a database of pieces of knowledge, we, quite naturally, might want to interlink them.
This would be mimicking the Web, or human (or, rather, artificial) brain, or some other semantic network.
Sometimes articles are released as “version 2.0”, and books quite often get a “Second Edition”. Another example of a dependent article is a solution book for a problem book, or a conference presentation for a paper.
That is what bibtex was originally for. If you have tex sources for many articles, with bib files included, you can draw a network of citations. I am not sure how exactly you would do that for articles for which bib files are not available, which is the case for most articles other than Arxiv ones, so the usefulness of this feature is dubious.
What I do want, however, is to be able to cite articles from the database using a hotkey, similar to reftex, and assemble a bib file for later upload to Arxiv.
Reading lists quite naturally go hand in hand with the concept of a “project”. What is a “project”? It is hard to define precisely, but for theorists and for humanities scholars, a “project” is most likely to include a set of books or articles to read, and a set of claims to prove or discursively argue for or against. (For experimental disciplines things are more involved.)
From the paragraph above, it is already quite visible that org-mode maps onto the concept of a project quite naturally.
When you have a project, say, you want to prove a certain theorem in Engineering Communications Theory (imaginary field), you might want to grind through a set of articles studying this field, which are usually on Arxiv, so you can annotate them in-place, and more importantly, place indexing markers in some interesting places.
Very often you will not be able to understand some theorems from a paper without background reading, so very soon you will, quite naturally, arrive at a graph of concepts. (I am not sure whether it can be called a “knowledge graph”, as I have seen that term used to describe a specific thing.) A theorem from a paper will require some (linked) reading to be understood. That “linked reading” will be in some other paper or book. If that book is not available as source, linking is likely to be done to the annotation file, or to an annotated pdf.
So, a “project” will be a “concept graph”, which will somehow refer to the concepts of the underlying papers/books. Making this graph is, seemingly, much easier than making a bibliographic citation graph, because, even if you have zero metadata about the paper or book you are reading, you are very likely to read through at least the table of contents, and re-coding the table of contents into a file takes negligible time compared to the time needed to understand the concepts themselves.
Aha! I have mentioned something without saying it explicitly. A Table of Contents is one of the most natural ways of breaking a paper into a skeleton, similar to org-mode’s outline. See the next paragraph.
So, I have mentioned a few ways of grinding through scientific material, which eventually should lead to the creation of a new piece of knowledge.
A “project” is a set of articles to read, and a set of concepts to define. Ideas for new concepts arise from consumed articles, and the need to read more articles arises from the need to understand concepts, from reading an “incoming” list, and from citations in other articles.
When we want to visualise what is going on, we will quite naturally see three kinds of links between “Pieces of Knowledge”. (I am abusing notation here. From now on, a “Piece of Knowledge” is not just an article, it may be any piece of text that deserves independent study, for example, a chapter, or a section.)
These three kinds of links are:
How exactly a “Concept Map” would map onto a “Ready-made article” is a debatable subject. In some sense, its value is that of the debugging symbols for a binary program. It should greatly improve understanding, but most probably will not happen to be the skeleton of the final paper.
In this article I want to set down the opinions I formed after reading Podman for DevOps. I hope this short review helps the reader decide whether reading it is a good idea.
This book continues my series on “learning how modern computers are used”. The series also covers books on PAM, SELinux, CMake, Networking, and TCP/IP. I invite the reader to browse my website.
Podman is an operating-system-level virtualisation tool. Actually, “virtualisation” here is something of a misnomer. Container technology basically works as follows. The operating system kernel provides system resources to all processes: memory maps, inter-process communication, the number of CPU cores, et cetera. On an ordinary system, the kernel gives every process the true information. In a container, the kernel deliberately provides adjusted information, in order to give the processes the impression of an independent operating system.
In other words, when using a virtual machine, the VM presents the guest operating system with a completely fabricated view of the hardware. In a container, all the hardware resources are the same, but the kernel’s view of them is different.
Podman is a front-end for adjusting that kernel view. Strictly speaking, such front-ends are not necessary: Linux ships with several lower-level tools that let an administrator manually adjust how processes share resources. But designing a virtual environment with the low-level tools takes an enormous amount of work. Podman’s user experience is very similar to Docker’s. Docker is the project that did the most to popularise container technology, but RedHat wanted a container front-end under its own control, and so created Podman.
The book consists of three parts: an overview with basic operations, building new containers, and using podman together with other software.
I cannot say that this book describes the Linux container framework in much detail. It stays at a rather high level, without introducing the concrete namespace structures. Perhaps using podman does not require a very detailed understanding of namespaces, but I expected this kind of book to cover the basics better.
I guess that, to help readers judge whether the book is worth reading, I should emphasise the shortcomings, since in the end readability depends more on tolerating the flaws than on enjoying the strengths.
What I dislike most: the book says very little about storage management, in particular about the virtual file systems overlayfs/btrfs. After all, every container technology has two crucial points: storage and networking. The book discusses the default graph driver (overlayfs), but not btrfs or zfs.
Overall, the book is somewhat loose. It is easy to read, easy to understand, and rather pleasant for the brain, but if you ask yourself whether you could retell its logic, the answer is not necessarily “yes”.
At least two chapters, “base image” and “container registry”, can be skipped when reading the book linearly.
One chapter is devoted to troubleshooting, although the debugging methods the author presents are rather unremarkable.
On the other hand, the book describes the container creation process rather well.
The book introduced me to a field I had not touched before: pure cloud computing. That paradigm is aimed at large internet companies building distributed web services. The book’s title does include “For DevOps”, but previously I had only seen DevOps that roughly resembled “system administration”.
I also rather liked its introduction of the concept of POSIX Capabilities. In particular, it suggested how I can avoid setting file capabilities by using capsh.
I also liked the book’s treatment of the interaction between containers and SELinux. I had already read Vermeulen’s book on SELinux, but at the time I felt… Now I can look at the same interaction from another angle.
One more merit: the book lists the prerequisite knowledge for each advanced chapter.
I also like that the book does not ignore podman’s cooperation with other tools. I already mentioned selinux, but the book also covers buildah, skopeo, quarkus, kubernetes, and udica, letting the reader see a wider context.
Overall, the book works better as a reference than as an introduction. I would give it roughly a 70% score.
This document was inspired by a book called “XeLaTeX appliqué aux sciences humaines” by Maïeul Rouquette.
He wrote a fairly comprehensive manual on how to use LaTeX for the humanities, explaining a lot of FAQs, including long citations and such. Really, it looks a little like “LaTeX packages I found useful for myself”; however, I wouldn’t look at such a work with arrogance. Although this kind of work can be accomplished by “just carefully reading the documentation”, doing that is not fast and not cheap, and such “seemingly straightforward” manuals in reality encompass a lot of wisdom and save a lot of time.
This is a document for myself, which might have been called “LaTeX for idiots who nevertheless want to do science”. The key attitude in it is that, really, idiots should offload as much computation as possible to computers, and only spend their own limited brains on solving the problems that no computer, no matter how strong, and no algorithm, no matter how sophisticated, can solve.
Speaking simply, scientific (LaTeX) documents should be as easy to read, process, and understand as possible, and if some feature LaTeX offers is usually omitted by people submitting papers to scientific journals, this is not a reason to follow their bogus example; this feature should be used in new documents, at least those written for oneself.
In this document I want to describe my impressions of the computer game Skyrim, which I played through in 2023, because I was ill and had no strength for anything else.
True to my principle of “even a mangy sheep yields a tuft of wool”, I want to set down my impressions in this file.
The game came out in 2011, more than 10 years ago, and I played it on a graphics card from 2014, so the experience should be more or less representative.
I have to start by saying that this game is utter garbage, simply hopeless, and nothing redeems it.
I cannot say that I did not know what I was getting into. I played Oblivion when it came out, I played other Bethesda Softworks games on the same engine, various flavours of Fallout, and, incidentally, not long ago I fired up Morrowind as the best-known example of a game on the Ogre engine.
That is, I had a rough idea of the pros and cons of a typical Bethesda product. But Skyrim beat all my expectations.
I have not seen such a tedious, slapdash game in a long time. I was genuinely astonished. The only game I can remember that was equally empty and meaningless is Sacred, from 2004.
Actually, no, that is not quite right. The game is made with great quality, “for a certain specific definition of what quality is”.
It excels at one thing: holding the player’s attention, forcing endless hours of dumb grinding most reminiscent of an MMORPG. (Let us note in the margins that The Elder Scrolls Online came out after Skyrim, so you cannot even blame the endless boredom on the company having kept on lots of people hired to build an MMOG.)
I have never seen a more vivid embodiment of the idea of “a beautiful shell, empty inside”. Even Crusaders of Might and Magic, where you could occasionally fall through the map and examine the wretched little labyrinths from outside, did not feel this plastic.
So, there is a gigantic map, larger or smaller than Oblivion’s I no longer know, but very decent in size. “Points of interest” are scattered over the map. What are they for? Who knows. Their only virtue is their pretty names.
In a normal RPG, some quests would lead the protagonist through these points of interest, or at least the context would make clear what role these ruins, towers, and villages play in what is going on. There is nothing of the kind in Skyrim. The landmarks are churned out by copy-paste, fulfilling a KPI of “the required number of ruins per square kilometre of map”.
It was not like this in Oblivion! Yes, Oblivion also had meaningless caves, but there was at least some understanding of what they were for; quests to explore the caves, for example, were handed out in nearby villages. In Skyrim, if a quest to explore a landmark is given at all, it is given half a map away, in a place with no connection to the landmark.
And how are quests “given”, anyway? The characters are voiced, which is a sign of a big budget, but that is the last good word one can say about the quests. There is no text for the spoken lines: the speaking character talks extremely slowly, listening to the end is beyond a normal person’s strength, so the lines get skipped. And where are they recorded afterwards? Nowhere, because the game’s interface was done half-heartedly.
Seriously, I am in shock. Morrowind had an incredibly wretched interface, but at least it had a “journal”. In Skyrim, on the one hand, ghostwriters have cranked out an unbelievable amount of text for the books, and even for the dialogues, yet finding what you need in it is not merely hard (as in Morrowind), it is simply impossible, because the key texts are not saved anywhere. Effective managers made this game.
The interface in general deserves its own paragraph of condemnation. It is bad in literally every respect. Comparing weapon parameters is impossible, checking weapon or spell parameters from the “quick menu” is impossible, and there are three different menus for “favourites”, very similar yet different, seemingly on purpose, to confuse you. Seeing what the character is wearing is impossible. Comparing equipped gear with stored gear is impossible. Even telling a useful item from a useless one is completely impossible.
Someone might say there is a kind of heightened roleplay in this, in not knowing whether an item is useful or useless. But if this really were some cunning design, there could at least have been some spell to tell you whether an item is useful, the way Clairvoyance shows you where to go. There is no such spell, so this is not roleplay; it is simply work done half-heartedly.
Nothing good can be said about the magic system, or about the “skills” in general, either. Lots of bureaucracy, little benefit. There used to be games in which magic was actually fun: Might and Magic, at least, or Gothic. Many different spells, all doing different things, and none of it feeling like stamp collecting.
In Skyrim, the three “elements” differ only cosmetically. Damage by touch, a “bolt”, and a “mine”: there are your three, the entire assortment of spells. There is also a pile of useless ones, but what is there to tell? Take “waterbreathing”: at first glance a great thing that should open up half a map’s worth of new locations and secrets, like underwater caves. But no, nothing. Breathe or do not breathe, there is nothing underwater, so the skill is useless.
In truth, almost all game mechanics in Skyrim leave that stamp-collecting feeling. Some items can somehow be repaired, enchanted, improved, potions can be brewed, and so on. There is no single place where all this is written down, and no systematisation, unless you google every 5 minutes and draw the crafting trees by hand. And it is unclear why you would bother, since crafting yields nothing good anyway.
And do not think that only crafting is useless. Almost everything in Skyrim is useless. It is one of the most frustrating games in the world. Killed some skeletons or bandits? Do not even bother looking at what they are carrying; go straight to the chest. Seriously, this is the only RPG in which looting enemies is not just boring but also pointless.
Speaking of killing: enemies are not easy to kill, and that, too, is very boring. There is no balance of combat tactics whatsoever; for every enemy tactic there exists exactly one counter-tactic, so you can forget about character specialisation. Levelling here is strictly either “right” or “wrong”. And, of course, as befits every bottom-tier game, the enemies are “stupid but accurate”. Honestly, Gothic, from 2000, offered more engaging fights than Skyrim. And the cherry on top: enemies in Skyrim strafe. Literally as if it were 1999 and we were playing Counter-Strike. At least they do not crouch to shrink their hitbox, as in one other well-known game, and do not teleport to you, as in another. But overall, as already said, fighting enemies is unbelievably boring. How one could make combat this dull after the new Fallout is beyond my understanding.
Fine, back to loot. If you really set yourself the goal, you can level up the character’s carrying capacity and try to earn money selling loot to merchants, but in Skyrim even this basic role-playing mechanic is undermined by the system’s flaws. Loot is worth pennies, and hauling it back and forth via “fast travel” very soon becomes as tedious as playing an MMOG.
Finally, I should probably write that the barrel of tar is not without its spoonful of honey. Skyrim does have beautiful landscapes. It really does. Around many of the cities you at first want to walk around just to enjoy the views, some of which are even non-trivial. True, no game mechanics are attached to them: there are no boats at the lakeside farm, and the mills and sawmills do nothing, but at least they are pleasant to look at. (If only my graphics card could handle decent settings.)
In conclusion, I must say that Skyrim has one indisputable virtue. It vaccinates you for a long time against the desire to “sit down and play something”. You sit down, clear a couple of dungeons (what a word) that nobody needs for anything, with stupid bots, tedious combat, and useless trophies that do not even advance the plot (there is not even a place where you could view “world map completion” statistics), and you realise that all these digital entertainments are dreary pixel-grinding, an intellectual trap, a surrogate for rest, and a meaningless waste of time. And you will feel good, because when you are not ill, you do something else.
This article, unlike the others that can be found on this site, does not review some technical book. Instead, this time I am going to tell the story of my experience of learning Maxima. I warn the reader that I have not finished this article yet; but since I have already reached certain results, I want to record the experience accumulated so far.
What is Maxima, and why did I decide to learn it? Maxima is the world's second computer algebra system (also known as a symbolic computation system). (The first computer algebra system was called Reduce.)
Maxima was written in the late 1960s in a language of the Lisp family. Its original name was Macsyma, and it came out of MIT's Project MAC (Man and Computer).
Project MAC spawned many different Lisps and other software of the first generation of artificial intelligence. Macsyma was one of its projects, developed with the support of the US Department of Energy. (The Department of Energy is the only agency of the US government managing non-military nuclear power.)
The old Macsyma evolved out of a traditional Lisp and was later reworked in the standardised Common Lisp. From 1998 onward it was renamed Maxima and became free software.
Because my work requires advanced mathematical reasoning, I wanted auxiliary software that could boost my computational powers. And since I am fairly familiar with Lisp, Maxima looked like a reasonable choice.
The history of my learning Maxima consists of three stages. The first was following the course text "Maxima for Theoretical Physicists" from the physics department of Moscow State University. The second stage had nothing to do with programming languages: reading Michel Talagrand's book on quantum field theory. The third was writing, in the Maxima language and Common Lisp, a Maxima extension package I needed myself, "Series of Random Functions".
I have not yet completed the third stage, which is why this article is unfinished.
Why did I start learning Maxima from this book? Frankly, I cannot say it is a very good learning resource. Neither of its two authors is a programming-language specialist; they are physicists. It is easy to tell from the text that they do not quite grasp the quirks of programming languages. For example, they do not understand the difference between a "function" and a "macro". (Readers who have seen my short review of CMake may be feeling a sense of déjà vu right now.) So why was I willing to put up with this?
Because their book has exercises. This is a sore point for me. I have seen plenty of books that explain things rather well yet contain no exercises at all. In my experience, the absence of exercises seriously hurts the learning outcome. Of course, I have some experience of my own and can set exercises for myself, but on the whole I find that answering the authors' exercises is both faster and more effective.
As mentioned above, the book contains a few exercises. Reading the main text of the book took me 14 hours. Not that long, possibly because my experience with programming languages is fairly rich.
Having finished the main text, I started on the exercises. Solving them was not as simple as I had imagined. The book's problem set consists of 5 exercises. The first and the second are comparatively easy to understand: they only require a careful reading of the book and of the official Maxima manual. These two exercises took me 13 hours, and over those 13 hours I got used to most of Maxima's common constructs: expressions, functions, macros, loops, recursion, blocks, code generation.
The third problem demanded considerably more effort, because it is about quantum mechanics and essentially requires computing functions of operators. I had not used quantum physics since university, so I needed to refresh my knowledge.
As mentioned above, the exercise book was written by theoretical physicists, and the third and fourth exercises are both about quantum mechanics.
To refresh my quantum mechanics I chose Michel Talagrand's "What is Quantum Field Theory". Why did I choose it, and, given that I have resolved to write a short review of every book, why do I mention it here instead?
The answer to the first question: because in our "theory of computation community" Talagrand counts among the most respected leaders. Quantum mechanics, and quantum field theory especially, is in fact not that close to algorithm theory and programming, which makes it all the more interesting to check what one of our grandmasters thinks of an unrelated field. Accordingly, I began reading the book without much hope for a good introduction. All the more pleased I was to discover that his opening chapters on ordinary quantum mechanics (not counting quantum field theory) are written rather clearly, include a sufficient number of exercises, and are mathematically more rigorous than the average physics book.
I also appreciate that he lists several other books on quantum theory, which the reader may consult wherever something is unclear.
I did not read the entire book: of its 35 chapters I read only the three that introduce ordinary quantum mechanics. So my experience does not yet meet the bar for a proper short review. That is the answer to the second question.
Still, if the reader is interested in quantum theory, I can recommend Talagrand's book.
Talagrand's book finally let me understand what the exercises were asking, yet I still failed to answer them completely. All told, the first and second exercises together cost me about 15 hours, roughly on par with reading the course text mentioned above. But the third and fourth exercises consumed over 30 hours, and required me to understand Maxima's pattern-matching algorithm, read the official Maxima manual in full, and even ask a few questions on the Maxima developers' mailing list.
In the end, one of the Maxima authors helped me write the algebra of the basic harmonic oscillator, and I spent several more days adapting it to the exercise's requirements, yet still failed. In effect, at that point I wrote the project off as a failure.
This failed project gave me much to think about. Even the Maxima developers quietly suggested using large-scale commercial software for the mathematical analysis, Mathematica or Maple for instance. How did it come to this? What to do now?
For the future, I am still considering learning FriCAS, another piece of mathematical software. FriCAS is bigger and more complex than Maxima, and may well not satisfy my needs either.
Do I still need to learn Mathematica, then? I looked into Mathematica once in the past and did not know whether to laugh or cry: it can do everything, yet it is restricted in every respect.
Incidentally, I found that Maxima's VTK support seems to have gone unmaintained for several years and did not work in 2023. That made me unhappy.
Still, what kinds of tasks does Maxima handle well?
Although quantum mechanics is not Maxima's strong suit, there are many tasks it performs well: symbolic differentiation and integration, for instance, and differential equations. First- and second-year students may well profit from using it for their homework. I have also seen many textbooks use it to illustrate mechanics, both statics and dynamics. On the internet one can likewise find excellent courses on building economic models with Maxima.
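For instance (a couple of one-liners of my own, not taken from any course):

diff(sin(x)*exp(x), x);            /* => %e^x*sin(x) + %e^x*cos(x) */
integrate(1/(1 + x^2), x);         /* => atan(x) */
ode2('diff(y,x,2) + y = 0, y, x);  /* => y = %k1*sin(x) + %k2*cos(x) */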
Maxima's Fortran and C interfaces are also fairly satisfying: one can analyse a problem in Maxima and then generate the Fortran or C++ code.
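A minimal sketch of that workflow (my own example; the built-in fortran() printer is the simplest entry point, while the gentran package used later in this text is the heavier alternative):

fortran(expand((x + y)^3));   /* prints something like y**3+3*x*y**2+3*x**2*y+x**3 in Fortran syntax */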
I hope that in the future Maxima will be of service to me, for instance for making illustrations or visualising functions. But for now the user experience is not as good as I had expected.
Headline | Time
---|---
Total time | 13:04
Solution | 13:04
Find 20 orthogonal polynomials… | 11:25
Using the Rodrigues formula | 3:55
Using the recurrence formula | 0:47
Using the generating function | 1:27
Using the hypergeometric function… | 5:16
Translate into the C language… | 1:39
In this file I would like to solve several mathematics problems using the Maxima symbolic computation system. The problems are taken from the Ilyin-Silaev textbook.
The Rodrigues formula: https://en.wikipedia.org/wiki/Rodrigues%27_formula
\[ P_{n}(x) = \frac{(-1)^{n}}{2^{n}\, n!} \frac{d^{n}}{dx^{n}}(1 - x^{2})^{n} \]
P_rodrigues[n]:= ev(((-1)^n)/((2^n) * (n!)) * diff((1 - x^2)^n, x, n), diff, expand);
P_rodrigues[n] := ev(((-1)^n)/((2^n) * (n!)) * diff((1 - x^2)^n, x, n), diff, expand);
for i:0 thru 19 do (
    simp: true,
    fr: rat(P_rodrigues[i]),
    simp: false,
    tex((1/denom(fr)) * num(fr)));
\[ {{1}\over{1}}\,1 \] \[ {{1}\over{1}}\,x \] \[ {{1}\over{2}}\,\left(3\,x^2-1\right) \] \[ {{1}\over{2}}\,\left(5\,x^3-3\,x\right) \] \[ {{1}\over{8}}\,\left(35\,x^4-30\,x^2+3\right) \] \[ {{1}\over{8}}\,\left(63\,x^5-70\,x^3+15\,x\right) \] \[ {{1}\over{16}}\,\left(231\,x^6-315\,x^4+105\,x^2-5\right) \] \[ {{1}\over{16}}\,\left(429\,x^7-693\,x^5+315\,x^3-35\,x\right) \] \[ {{1}\over{128}}\,\left(6435\,x^8-12012\,x^6+6930\,x^4-1260\,x^2+35 \right) \] \[ {{1}\over{128}}\,\left(12155\,x^9-25740\,x^7+18018\,x^5-4620\,x^3+ 315\,x\right) \] \[ {{1}\over{256}}\,\left(46189\,x^{10}-109395\,x^8+90090\,x^6-30030 \,x^4+3465\,x^2-63\right) \] \[ {{1}\over{256}}\,\left(88179\,x^{11}-230945\,x^9+218790\,x^7-90090 \,x^5+15015\,x^3-693\,x\right) \] \[ {{1}\over{1024}}\,\left(676039\,x^{12}-1939938\,x^{10}+2078505\,x^ 8-1021020\,x^6+225225\,x^4-18018\,x^2+231\right) \] \[ {{1}\over{1024}}\,\left(1300075\,x^{13}-4056234\,x^{11}+4849845\,x ^9-2771340\,x^7+765765\,x^5-90090\,x^3+3003\,x\right) \] \[ {{1}\over{2048}}\,\left(5014575\,x^{14}-16900975\,x^{12}+22309287 \,x^{10}-14549535\,x^8+4849845\,x^6-765765\,x^4+45045\,x^2-429 \right) \] \[ {{1}\over{2048}}\,\left(9694845\,x^{15}-35102025\,x^{13}+50702925 \,x^{11}-37182145\,x^9+14549535\,x^7-2909907\,x^5+255255\,x^3-6435\, x\right) \] Unable to evaluate predicate 1 + (1 + (1
– an error. To debug this try: debugmode(true);
It is rather curious that at degree 16 something, though it is unclear what exactly, overflowed.
P_rodrigues[n] := ev(((-1)^n)/((2^n) * (n!)) * diff((1 - x^2)^n, x, n), diff, expand);
for i:0 thru 19 do tex(P_rodrigues[i]);
\[ 1 \] \[ x \] \[ {{3\,x^2}\over{2}}-{{1}\over{2}} \] \[ {{5\,x^3}\over{2}}-{{3\,x}\over{2}} \] \[ {{35\,x^4}\over{8}}-{{15\,x^2}\over{4}}+{{3}\over{8}} \] \[ {{63\,x^5}\over{8}}-{{35\,x^3}\over{4}}+{{15\,x}\over{8}} \] \[ {{231\,x^6}\over{16}}-{{315\,x^4}\over{16}}+{{105\,x^2}\over{16}}- {{5}\over{16}} \] \[ {{429\,x^7}\over{16}}-{{693\,x^5}\over{16}}+{{315\,x^3}\over{16}}- {{35\,x}\over{16}} \] \[ {{6435\,x^8}\over{128}}-{{3003\,x^6}\over{32}}+{{3465\,x^4}\over{ 64}}-{{315\,x^2}\over{32}}+{{35}\over{128}} \] \[ {{12155\,x^9}\over{128}}-{{6435\,x^7}\over{32}}+{{9009\,x^5}\over{ 64}}-{{1155\,x^3}\over{32}}+{{315\,x}\over{128}} \] \[ {{46189\,x^{10}}\over{256}}-{{109395\,x^8}\over{256}}+{{45045\,x^6 }\over{128}}-{{15015\,x^4}\over{128}}+{{3465\,x^2}\over{256}}-{{63 }\over{256}} \] \[ {{88179\,x^{11}}\over{256}}-{{230945\,x^9}\over{256}}+{{109395\,x^ 7}\over{128}}-{{45045\,x^5}\over{128}}+{{15015\,x^3}\over{256}}-{{ 693\,x}\over{256}} \] \[ {{676039\,x^{12}}\over{1024}}-{{969969\,x^{10}}\over{512}}+{{ 2078505\,x^8}\over{1024}}-{{255255\,x^6}\over{256}}+{{225225\,x^4 }\over{1024}}-{{9009\,x^2}\over{512}}+{{231}\over{1024}} \] \[ {{1300075\,x^{13}}\over{1024}}-{{2028117\,x^{11}}\over{512}}+{{ 4849845\,x^9}\over{1024}}-{{692835\,x^7}\over{256}}+{{765765\,x^5 }\over{1024}}-{{45045\,x^3}\over{512}}+{{3003\,x}\over{1024}} \] \[ {{5014575\,x^{14}}\over{2048}}-{{16900975\,x^{12}}\over{2048}}+{{ 22309287\,x^{10}}\over{2048}}-{{14549535\,x^8}\over{2048}}+{{4849845 \,x^6}\over{2048}}-{{765765\,x^4}\over{2048}}+{{45045\,x^2}\over{ 2048}}-{{429}\over{2048}} \] \[ {{9694845\,x^{15}}\over{2048}}-{{35102025\,x^{13}}\over{2048}}+{{ 50702925\,x^{11}}\over{2048}}-{{37182145\,x^9}\over{2048}}+{{ 14549535\,x^7}\over{2048}}-{{2909907\,x^5}\over{2048}}+{{255255\,x^3 }\over{2048}}-{{6435\,x}\over{2048}} \] \[ {{300540195\,x^{16}}\over{32768}}-{{145422675\,x^{14}}\over{4096}} +{{456326325\,x^{12}}\over{8192}}-{{185910725\,x^{10}}\over{4096}}+ {{334639305\,x^8}\over{16384}}-{{20369349\,x^6}\over{4096}}+{{ 4849845\,x^4}\over{8192}}-{{109395\,x^2}\over{4096}}+{{6435}\over{ 32768}} \] \[ {{583401555\,x^{17}}\over{32768}}-{{300540195\,x^{15}}\over{4096}} +{{1017958725\,x^{13}}\over{8192}}-{{456326325\,x^{11}}\over{4096}}+ {{929553625\,x^9}\over{16384}}-{{66927861\,x^7}\over{4096}}+{{ 20369349\,x^5}\over{8192}}-{{692835\,x^3}\over{4096}}+{{109395\,x }\over{32768}} \] \[ {{2268783825\,x^{18}}\over{65536}}-{{9917826435\,x^{16}}\over{ 65536}}+{{4508102925\,x^{14}}\over{16384}}-{{4411154475\,x^{12} }\over{16384}}+{{5019589575\,x^{10}}\over{32768}}-{{1673196525\,x^8 }\over{32768}}+{{156165009\,x^6}\over{16384}}-{{14549535\,x^4}\over{ 16384}}+{{2078505\,x^2}\over{65536}}-{{12155}\over{65536}} \] \[ {{4418157975\,x^{19}}\over{65536}}-{{20419054425\,x^{17}}\over{ 65536}}+{{9917826435\,x^{15}}\over{16384}}-{{10518906825\,x^{13} }\over{16384}}+{{13233463425\,x^{11}}\over{32768}}-{{5019589575\,x^9 }\over{32768}}+{{557732175\,x^7}\over{16384}}-{{66927861\,x^5}\over{ 16384}}+{{14549535\,x^3}\over{65536}}-{{230945\,x}\over{65536}} \]
The recurrence formula, taken from the same Ilyin-Silaev textbook:
\[(n+1)P_{n+1} = (2n + 1)xP_{n}-nP_{n-1}\]
Recursion ought to work decently with memoization, in theory. Let's try memoizing.
\[n P_{n} = (2n-1)xP_{n-1}-(n-1)P_{n-2}\]
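Conveniently, Maxima's array functions (defined with f[n] := ..., as below) memoize out of the box: every value computed is cached in the array. A tiny check of my own before relying on that:

fib[n] := fib[n-1] + fib[n-2];
fib[0] : 0;
fib[1] : 1;
fib[25];          /* => 75025; later calls for cached indices are table lookups */
arrayinfo(fib);   /* lists the indices whose values are now cached */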
P_recurrent[n] := expand(((2*n - 1)*x*P_recurrent[n-1] - (n-1)*P_recurrent[n-2])/(n));
P_recurrent[0] : 1;
P_recurrent[1] : x;
P_recurrent[n] := expand(((2*n - 1)*x*P_recurrent[n-1] - (n-1)*P_recurrent[n-2])/(n));
P_recurrent[0] : 1;
P_recurrent[1] : x;
for i:0 thru 19 do tex(P_recurrent[i]);
\[ 1 \] \[ x \] \[ {{3\,x^2-1}\over{2}} \] \[ {{5\,x^3-3\,x}\over{2}} \] \[ {{35\,x^4-30\,x^2+3}\over{8}} \] \[ {{63\,x^5-70\,x^3+15\,x}\over{8}} \] \[ {{231\,x^6-315\,x^4+105\,x^2-5}\over{16}} \] \[ {{429\,x^7-693\,x^5+315\,x^3-35\,x}\over{16}} \] \[ {{6435\,x^8-12012\,x^6+6930\,x^4-1260\,x^2+35}\over{128}} \] \[ {{12155\,x^9-25740\,x^7+18018\,x^5-4620\,x^3+315\,x}\over{128}} \] \[ {{46189\,x^{10}-109395\,x^8+90090\,x^6-30030\,x^4+3465\,x^2-63 }\over{256}} \] \[ {{88179\,x^{11}-230945\,x^9+218790\,x^7-90090\,x^5+15015\,x^3-693 \,x}\over{256}} \] \[ {{676039\,x^{12}-1939938\,x^{10}+2078505\,x^8-1021020\,x^6+225225 \,x^4-18018\,x^2+231}\over{1024}} \] \[ {{1300075\,x^{13}-4056234\,x^{11}+4849845\,x^9-2771340\,x^7+765765 \,x^5-90090\,x^3+3003\,x}\over{1024}} \] \[ {{5014575\,x^{14}-16900975\,x^{12}+22309287\,x^{10}-14549535\,x^8+ 4849845\,x^6-765765\,x^4+45045\,x^2-429}\over{2048}} \] \[ {{9694845\,x^{15}-35102025\,x^{13}+50702925\,x^{11}-37182145\,x^9+ 14549535\,x^7-2909907\,x^5+255255\,x^3-6435\,x}\over{2048}} \] \[ {{300540195\,x^{16}-1163381400\,x^{14}+1825305300\,x^{12}- 1487285800\,x^{10}+669278610\,x^8-162954792\,x^6+19399380\,x^4- 875160\,x^2+6435}\over{32768}} \] \[ {{583401555\,x^{17}-2404321560\,x^{15}+4071834900\,x^{13}- 3650610600\,x^{11}+1859107250\,x^9-535422888\,x^7+81477396\,x^5- 5542680\,x^3+109395\,x}\over{32768}} \] \[ {{2268783825\,x^{18}-9917826435\,x^{16}+18032411700\,x^{14}- 17644617900\,x^{12}+10039179150\,x^{10}-3346393050\,x^8+624660036\,x ^6-58198140\,x^4+2078505\,x^2-12155}\over{65536}} \] \[ {{4418157975\,x^{19}-20419054425\,x^{17}+39671305740\,x^{15}- 42075627300\,x^{13}+26466926850\,x^{11}-10039179150\,x^9+2230928700 \,x^7-267711444\,x^5+14549535\,x^3-230945\,x}\over{65536}} \]
And that's all there is to it?
The generating function is this one: https://en.wikipedia.org/wiki/Generating_function
Wikipedia has outright step-by-step instructions on using it to compute the Legendre polynomials:
https://en.wikipedia.org/wiki/Legendre_polynomials#Definition_via_generating_function
\[ \frac{1}{\sqrt{1-2xt+t^2}} = \sum_{n=0}^\infty P_n(x) t^n \]
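The definition below leans on the fact that part can pull individual terms out of a truncated taylor expansion. A quick illustration of my own, on a series whose terms are obvious:

t5 : taylor(1/(1 - x), x, 0, 5);   /* => 1 + x + x^2 + x^3 + x^4 + x^5 + ... */
part(t5, [3]);                     /* => x^2, the third term of the series */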
/* The n-th Taylor coefficient in t; the original used the loop variable i here instead of the index n */
P_generating[n] := expand(part(taylor(1/sqrt(1 - 2*x*t + t^2), t, 0, 19), [n+1])/(t^n));
P_generating[n] := expand(part(taylor(1/sqrt(1 - 2*x*t + t^2), t, 0, 19), [n+1])/(t^n));
for i:0 thru 19 do tex(P_generating[i]);
\[ 1 \] \[ x \] \[ {{3\,x^2-1}\over{2}} \] \[ {{5\,x^3-3\,x}\over{2}} \] \[ {{35\,x^4-30\,x^2+3}\over{8}} \] \[ {{63\,x^5-70\,x^3+15\,x}\over{8}} \] \[ {{231\,x^6-315\,x^4+105\,x^2-5}\over{16}} \] \[ {{429\,x^7-693\,x^5+315\,x^3-35\,x}\over{16}} \] \[ {{6435\,x^8-12012\,x^6+6930\,x^4-1260\,x^2+35}\over{128}} \] \[ {{12155\,x^9-25740\,x^7+18018\,x^5-4620\,x^3+315\,x}\over{128}} \] \[ {{46189\,x^{10}-109395\,x^8+90090\,x^6-30030\,x^4+3465\,x^2-63 }\over{256}} \] \[ {{88179\,x^{11}-230945\,x^9+218790\,x^7-90090\,x^5+15015\,x^3-693 \,x}\over{256}} \] \[ {{676039\,x^{12}-1939938\,x^{10}+2078505\,x^8-1021020\,x^6+225225 \,x^4-18018\,x^2+231}\over{1024}} \] \[ {{1300075\,x^{13}-4056234\,x^{11}+4849845\,x^9-2771340\,x^7+765765 \,x^5-90090\,x^3+3003\,x}\over{1024}} \] \[ {{5014575\,x^{14}-16900975\,x^{12}+22309287\,x^{10}-14549535\,x^8+ 4849845\,x^6-765765\,x^4+45045\,x^2-429}\over{2048}} \] \[ {{9694845\,x^{15}-35102025\,x^{13}+50702925\,x^{11}-37182145\,x^9+ 14549535\,x^7-2909907\,x^5+255255\,x^3-6435\,x}\over{2048}} \] \[ {{300540195\,x^{16}-1163381400\,x^{14}+1825305300\,x^{12}- 1487285800\,x^{10}+669278610\,x^8-162954792\,x^6+19399380\,x^4- 875160\,x^2+6435}\over{32768}} \] \[ {{583401555\,x^{17}-2404321560\,x^{15}+4071834900\,x^{13}- 3650610600\,x^{11}+1859107250\,x^9-535422888\,x^7+81477396\,x^5- 5542680\,x^3+109395\,x}\over{32768}} \] \[ {{2268783825\,x^{18}-9917826435\,x^{16}+18032411700\,x^{14}- 17644617900\,x^{12}+10039179150\,x^{10}-3346393050\,x^8+624660036\,x ^6-58198140\,x^4+2078505\,x^2-12155}\over{65536}} \] \[ {{4418157975\,x^{19}-20419054425\,x^{17}+39671305740\,x^{15}- 42075627300\,x^{13}+26466926850\,x^{11}-10039179150\,x^9+2230928700 \,x^7-267711444\,x^5+14549535\,x^3-230945\,x}\over{65536}} \]
The hypergeometric function is a seriously cool function, see https://en.wikipedia.org/wiki/Hypergeometric_function and https://dlmf.nist.gov/15 .
\[ F(a,b,c,z) = \sum_{n=0}^{\infty} \frac{(a)_{n}(b)_{n}}{(c)_{n}} \frac{z^{n}}{n!} \]
where \((a)_{n}\) is the Pochhammer symbol
\[(a)_{n}=\prod_{i=0}^{n-1}(a+i) \]
Expressing the Legendre polynomials through it goes like this:
\[ P_{n}= F\left(-n,n+1,1,\frac{1-x}{2}\right) \]
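A quick hand check for \(n=2\) (my own arithmetic): the series terminates, since \((-n)_{k}=0\) for \(k>n\):

\[ F\left(-2,3,1,\frac{1-x}{2}\right) = 1 - 6\cdot\frac{1-x}{2} + 6\left(\frac{1-x}{2}\right)^{2} = \frac{3x^{2}-1}{2} = P_{2}(x) \]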
It is a relief that Maxima has a pochhammer function.
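(A quick probe of what it returns; if memory serves, pochhammer ships with the orthopoly package, so it may need loading first:)

load("orthopoly");
pochhammer(a, 3);   /* => a*(a+1)*(a+2) */
pochhammer(2, 4);   /* => 120, i.e. 2*3*4*5 */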
Shall we begin by implementing the hypergeometric function from scratch?
showtime: true;
load("simplify_sum");
F: sum( ((pochhammer(a,n)*pochhammer(b,n))/(pochhammer(c,n)))*((z^n)/(n!)), n, 0, inf );
tex(F);
P: subst([a=-p, b=p+1, c=1, z=((1-x)/2)], F);
simplify_sum(subst([p=1], P));
\[ \sum_{n=0}^{\infty }{{{\left(a\right)_{n}\,\left(b\right)_{n}\,z^{n}}\over{\left(c\right)_{n}\,n!}}} \]
(followed by a two-dimensional console rendering of the simplified sum, with factors \((-1)\), \(2\), and \((x-1)\) carrying \(n+1\) exponents, mangled in this export)
Evaluation took 0.9851 seconds (1.0134 elapsed) using 36.178 MB.
The result is so-so. Let's try the ready-made hypergeometric function instead.
P_hypergeometric[n] := ev(hypergeometric([-n,n+1],[1],((1-x)/2)), diff, expand, simp);
P_hypergeometric[n] := ev(hypergeometric([-n,n+1],[1],((1-x)/2)), diff, expand, simp);
for i:0 thru 19 do tex(P_hypergeometric[i]);
\[ 1 \] \[ x \] \[ {{3\,x^2}\over{2}}-{{1}\over{2}} \] \[ {{5\,x^3}\over{2}}-{{3\,x}\over{2}} \] \[ {{35\,x^4}\over{8}}-{{15\,x^2}\over{4}}+{{3}\over{8}} \] \[ {{63\,x^5}\over{8}}-{{35\,x^3}\over{4}}+{{15\,x}\over{8}} \] \[ {{231\,x^6}\over{16}}-{{315\,x^4}\over{16}}+{{105\,x^2}\over{16}}- {{5}\over{16}} \] \[ {{429\,x^7}\over{16}}-{{693\,x^5}\over{16}}+{{315\,x^3}\over{16}}- {{35\,x}\over{16}} \] \[ {{6435\,x^8}\over{128}}-{{3003\,x^6}\over{32}}+{{3465\,x^4}\over{ 64}}-{{315\,x^2}\over{32}}+{{35}\over{128}} \] \[ {{12155\,x^9}\over{128}}-{{6435\,x^7}\over{32}}+{{9009\,x^5}\over{ 64}}-{{1155\,x^3}\over{32}}+{{315\,x}\over{128}} \] \[ {{46189\,x^{10}}\over{256}}-{{109395\,x^8}\over{256}}+{{45045\,x^6 }\over{128}}-{{15015\,x^4}\over{128}}+{{3465\,x^2}\over{256}}-{{63 }\over{256}} \] \[ {{88179\,x^{11}}\over{256}}-{{230945\,x^9}\over{256}}+{{109395\,x^ 7}\over{128}}-{{45045\,x^5}\over{128}}+{{15015\,x^3}\over{256}}-{{ 693\,x}\over{256}} \] \[ {{676039\,x^{12}}\over{1024}}-{{969969\,x^{10}}\over{512}}+{{ 2078505\,x^8}\over{1024}}-{{255255\,x^6}\over{256}}+{{225225\,x^4 }\over{1024}}-{{9009\,x^2}\over{512}}+{{231}\over{1024}} \] \[ {{1300075\,x^{13}}\over{1024}}-{{2028117\,x^{11}}\over{512}}+{{ 4849845\,x^9}\over{1024}}-{{692835\,x^7}\over{256}}+{{765765\,x^5 }\over{1024}}-{{45045\,x^3}\over{512}}+{{3003\,x}\over{1024}} \] \[ {{5014575\,x^{14}}\over{2048}}-{{16900975\,x^{12}}\over{2048}}+{{ 22309287\,x^{10}}\over{2048}}-{{14549535\,x^8}\over{2048}}+{{4849845 \,x^6}\over{2048}}-{{765765\,x^4}\over{2048}}+{{45045\,x^2}\over{ 2048}}-{{429}\over{2048}} \] \[ {{9694845\,x^{15}}\over{2048}}-{{35102025\,x^{13}}\over{2048}}+{{ 50702925\,x^{11}}\over{2048}}-{{37182145\,x^9}\over{2048}}+{{ 14549535\,x^7}\over{2048}}-{{2909907\,x^5}\over{2048}}+{{255255\,x^3 }\over{2048}}-{{6435\,x}\over{2048}} \] \[ {{300540195\,x^{16}}\over{32768}}-{{145422675\,x^{14}}\over{4096}} +{{456326325\,x^{12}}\over{8192}}-{{185910725\,x^{10}}\over{4096}}+ {{334639305\,x^8}\over{16384}}-{{20369349\,x^6}\over{4096}}+{{ 4849845\,x^4}\over{8192}}-{{109395\,x^2}\over{4096}}+{{6435}\over{ 32768}} \] \[ {{583401555\,x^{17}}\over{32768}}-{{300540195\,x^{15}}\over{4096}} +{{1017958725\,x^{13}}\over{8192}}-{{456326325\,x^{11}}\over{4096}}+ {{929553625\,x^9}\over{16384}}-{{66927861\,x^7}\over{4096}}+{{ 20369349\,x^5}\over{8192}}-{{692835\,x^3}\over{4096}}+{{109395\,x }\over{32768}} \] \[ {{2268783825\,x^{18}}\over{65536}}-{{9917826435\,x^{16}}\over{ 65536}}+{{4508102925\,x^{14}}\over{16384}}-{{4411154475\,x^{12} }\over{16384}}+{{5019589575\,x^{10}}\over{32768}}-{{1673196525\,x^8 }\over{32768}}+{{156165009\,x^6}\over{16384}}-{{14549535\,x^4}\over{ 16384}}+{{2078505\,x^2}\over{65536}}-{{12155}\over{65536}} \] \[ {{4418157975\,x^{19}}\over{65536}}-{{20419054425\,x^{17}}\over{ 65536}}+{{9917826435\,x^{15}}\over{16384}}-{{10518906825\,x^{13} }\over{16384}}+{{13233463425\,x^{11}}\over{32768}}-{{5019589575\,x^9 }\over{32768}}+{{557732175\,x^7}\over{16384}}-{{66927861\,x^5}\over{ 16384}}+{{14549535\,x^3}\over{65536}}-{{230945\,x}\over{65536}} \]
The second approach turned out to be much faster.
Okay, basically I will solve it by:
I1: elapsed_real_time();
P_rodrigues[n] := ev(((-1)^n)/((2^n) * (n!)) * diff((1 - x^2)^n, x, n), diff, expand);
I2: elapsed_real_time();
P_recurrent[n] := expand(((2*n - 1)*x*P_recurrent[n-1] - (n-1)*P_recurrent[n-2])/(n));
P_recurrent[0] : 1;
P_recurrent[1] : x;
I3: elapsed_real_time();
P_generating[n] := expand(part(taylor(1/sqrt(1 - 2*x*t + t^2), t, 0, 19), [n+1])/(t^n));
I4: elapsed_real_time();
P_hypergeometric[n] := ev(hypergeometric([-n,n+1],[1],((1-x)/2)), diff, expand, simp);
I5: elapsed_real_time();
for i:0 thru 19 do block([],
    /* tex(P_rodrigues[i]), */
    /* tex(P_recurrent[i]), */
    ldisplay( rat(P_recurrent[i] - P_rodrigues[i]) = 0 ),
    /* tex(P_generating[i]), */
    ldisplay( rat(P_generating[i] - P_recurrent[i]) = 0 ),
    /* tex(P_hypergeometric[i]), */
    ldisplay( rat(P_hypergeometric[i] - P_generating[i]) = 0 ),
    printf(true, "~%----------------------~%"));
(%t1)/R/ 0 = 0 (%t2)/R/ 0 = 0 (%t3)/R/ 0 = 0
(%t4)/R/ 0 = 0 (%t5)/R/ 0 = 0 (%t6)/R/ 0 = 0
(%t7)/R/ 0 = 0 (%t8)/R/ 0 = 0 (%t9)/R/ 0 = 0
(%t10)/R/ 0 = 0 (%t11)/R/ 0 = 0 (%t12)/R/ 0 = 0
(%t13)/R/ 0 = 0 (%t14)/R/ 0 = 0 (%t15)/R/ 0 = 0
(%t16)/R/ 0 = 0 (%t17)/R/ 0 = 0 (%t18)/R/ 0 = 0
(%t19)/R/ 0 = 0 (%t20)/R/ 0 = 0 (%t21)/R/ 0 = 0
(%t22)/R/ 0 = 0 (%t23)/R/ 0 = 0 (%t24)/R/ 0 = 0
(%t25)/R/ 0 = 0 (%t26)/R/ 0 = 0 (%t27)/R/ 0 = 0
(%t28)/R/ 0 = 0 (%t29)/R/ 0 = 0 (%t30)/R/ 0 = 0
(%t31)/R/ 0 = 0 (%t32)/R/ 0 = 0 (%t33)/R/ 0 = 0
(%t34)/R/ 0 = 0 (%t35)/R/ 0 = 0 (%t36)/R/ 0 = 0
(%t37)/R/ 0 = 0 (%t38)/R/ 0 = 0 (%t39)/R/ 0 = 0
(%t40)/R/ 0 = 0 (%t41)/R/ 0 = 0 (%t42)/R/ 0 = 0
(%t43)/R/ 0 = 0 (%t44)/R/ 0 = 0 (%t45)/R/ 0 = 0
(%t46)/R/ 0 = 0 (%t47)/R/ 0 = 0 (%t48)/R/ 0 = 0
(%t49)/R/ 0 = 0 (%t50)/R/ 0 = 0 (%t51)/R/ 0 = 0
(%t52)/R/ 0 = 0 (%t53)/R/ 0 = 0 (%t54)/R/ 0 = 0
(%t55)/R/ 0 = 0 (%t56)/R/ 0 = 0 (%t57)/R/ 0 = 0
(%t58)/R/ 0 = 0 (%t59)/R/ 0 = 0 (%t60)/R/ 0 = 0
Everything seems to agree.
I1: elapsed_real_time();
P_rodrigues[n] := ev(((-1)^n)/((2^n) * (n!)) * diff((1 - x^2)^n, x, n), diff, expand);
I2: elapsed_real_time();
P_recurrent[n] := expand(((2*n - 1)*x*P_recurrent[n-1] - (n-1)*P_recurrent[n-2])/(n));
P_recurrent[0] : 1;
P_recurrent[1] : x;
I3: elapsed_real_time();
P_generating[n] := expand(part(taylor(1/sqrt(1 - 2*x*t + t^2), t, 0, 19), [n+1])/(t^n));
I4: elapsed_real_time();
P_hypergeometric[n] := ev(hypergeometric([-n,n+1],[1],((1-x)/2)), diff, expand, simp);
I5: elapsed_real_time();
printf(true, "rodrigues__time=~f~%", float(I2-I1));
printf(true, "recurrent__time=~f~%", float(I3)-float(I2));
printf(true, "generating_time=~f~%", float(I4)-float(I3));
printf(true, "hypergeome_time=~f~%", float(I5)-float(I4));
measure_time(expression) ::= block(
    [time: elapsed_real_time(), dummy: 0],
    dummy: ev(expression),
    elapsed_real_time() - time );
results: matrix( ['rodrigues], ['recurrent], ['generating], ['hypergeometric]);
fpprintprec: 3;
for i:0 thru 19 do (
    /* ldisplay( P_rodrigues[i] ), */
    /* ldisplay( P_recurrent[i] ), */
    /* ldisplay( P_generating[i] ), */
    /* ldisplay( P_hypergeometric[i] ), */
    /* printf(true, "~%----------------------~%") */
    results: addcol( results,
        [ measure_time(P_rodrigues[i]),
          measure_time(P_recurrent[i]),
          measure_time(P_generating[i]),
          measure_time(P_hypergeometric[i]) ] ) );
texput(matrix, lambda([m], block(
    [rows : length(m), cols : length(m[1]), r, c, s: ""],
    s : sconcat("\\begin{pmatrix}", newline),
    for r:1 thru rows do (
        for c:1 thru cols do (
            s : sconcat(s, tex1(m[r][c]), " & ") ),
        s : sconcat(s, "\\\\", newline) ),
    sconcat(s, "\\end{pmatrix}") )));
disp(sconcat( "\\[ ", tex1(results), " \\]" ));
rodrigues__time=0.0006490000000000003
recurrent__time=0.0008319999999999994
generating_time=0.0006519999999999998
hypergeome_time=0.00048000000000000126
\[ \begin{pmatrix}
{\it rodrigues} & 3.72 \times 10^{-4} & 2.88 \times 10^{-4} & 5.52 \times 10^{-4} & 8.49 \times 10^{-4} & 0.00105 & 0.0015 & 0.00191 & 0.00248 & 0.00326 & 0.00457 & 0.00478 & 0.00574 & 0.00683 & 0.00897 & 0.00913 & 0.0108 & 0.0149 & 0.0146 & 0.0155 & 0.0173 & \\
{\it recurrent} & 3.6 \times 10^{-5} & 3.7 \times 10^{-5} & 2.56 \times 10^{-4} & 5.36 \times 10^{-4} & 6.38 \times 10^{-4} & 7.88 \times 10^{-4} & 9.16 \times 10^{-4} & 0.00108 & 0.00107 & 0.00146 & 0.00148 & 0.00166 & 0.00165 & 0.00212 & 0.00187 & 0.00205 & 0.00219 & 0.0028 & 0.00242 & 0.00269 & \\
{\it generating} & 0.00607 & 0.00627 & 0.00589 & 0.00622 & 0.00603 & 0.0127 & 0.00626 & 0.00614 & 0.00637 & 0.00632 & 0.00618 & 0.00733 & 0.00634 & 0.00656 & 0.00654 & 0.00664 & 0.00706 & 0.00787 & 0.00695 & 0.00736 & \\
{\it hypergeometric} & 0.067 & 4.95 \times 10^{-4} & 8.56 \times 10^{-4} & 0.00131 & 0.00206 & 0.00306 & 0.00427 & 0.00653 & 0.0071 & 0.00874 & 0.0173 & 0.0137 & 0.0161 & 0.0264 & 0.022 & 0.0316 & 0.0283 & 0.0383 & 0.0441 & 0.0422 & \\
\end{pmatrix} \]
All of this looks more or less negligibly small; still, at first glance it seems the hypergeometric method, and the Rodrigues method as well, can get sluggish.
The program must define two functions, \(f\) and \(T\), two different representations of the same function: a symbolic expression, and its Taylor series up to degree 4
\[ f = \frac{1}{5+\sin(x)} \]
And compare the results from 0 to 1.0 with a step of 0.05.
load("gentran"); gentranlang: c; f: 1/(5+ sin(x)); T(x) := part(taylor(f, x, 0,4), [1,2,3,4,5]); print("#include"); print("#include "); print( "double f(double x){ return "); gentran(eval(f)); print( "; }"); print( ""); print( "double T(double x){ return "); gentran(eval(T(x))); print("; }"); print(""); print("int main() { // return 0;"); print("for(double i=0; i<=1; i=i+0.05) { printf(\"%f\\t%f\\t%f\\t%f\\n\", i, f(i), T(i), f(i) - T(i)) ;}"); print("return 0;}");
#include <stdio.h>
#include <math.h>
double f( double x){ return 1/(5+sin(x)) ; }
double T( double x){ return -22.0/9375.0*pow(x,4.0)+19.0/3750.0*pow(x,3.0)+pow(x,2.0)/125.0-1.0/25.0*x+1.0/5.0 ; }
int main() { // return 0;
for( double i=0; i<=1; i=i+0.05) { printf( "%f\t%f\t%f\t%f\n", i, f(i), T(i), f(i) - T(i)) ;}
return 0;}
Okay, gentran should do this better than my crude solution here, but I am not sure how. gentran_on, meanwhile, does not seem to do what I expect it to, and pushing the output through maxima itself does not seem to make much sense to me.
It is nice to know that maxima supports this, but until I have a piece of software that would use the trick, I have no use for it.
Write a program that computes the result of applying an arbitrary function of operators to an arbitrary state vector.
A function of the annihilation operators \(a_{1},a_{2}\), acting on an arbitrary state vector of a two-dimensional linear harmonic oscillator:
\[ |\psi\rangle = \sum_{i=0}^{N}\sum_{j=0}^{M}C_{i,j}\,|i,j\rangle, \qquad a_{1}|i,j\rangle = \sqrt{i}\,|i-1,j\rangle, \qquad a_{2}|i,j\rangle = \sqrt{j}\,|i,j-1\rangle \]
(My bra and ket vectors in the source of the formula above came out slightly mangled, but never mind.)
We need to compute \(\sin(a_{1} + a_{2} a_{1})\,\bigl[\,w\,|4,8\rangle - (3/7)\,|11,3\rangle\,\bigr]\)
So, what can be done here?
What does a person who last saw quantum mechanics more than 10 years ago need to recall about it?
\(|i\rangle\) is a clever notation for the vector \((0\ 0\ 0\ \ldots\ 1\ \ldots)\) in the discrete case. People often say it is "a system containing \(i\) particles", although far more often it is actually a system in which a single particle sits in the \(i\)-th excited state.
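Plugging the concrete kets of this problem into the definitions above, just to see the ladder action once (my own worked instance):

\[ a_{1}|4,8\rangle = \sqrt{4}\,|3,8\rangle = 2\,|3,8\rangle, \qquad a_{2}|4,8\rangle = \sqrt{8}\,|4,7\rangle = 2\sqrt{2}\,|4,7\rangle \]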
My state vector will be a sparse matrix in the computer sense, naturally.
What is \(w\) in this problem? Okay, let us take it to be a symbolic parameter that must make it into the final answer.
All in all, I would like, for example, to get the result as a symbolic matrix?
input[4,8] : w;
input[11,3] : -3/7;
display(arrayinfo(input));
/* How do I set up input with a single expression? */
my_opfun[x] := block([useless], 1);
a1(x) := block([retval] );
display(my_opfun[3]);
arrayinfo(input) = [hashed, 2, [4, 8], [11, 3]]
my_opfun[3] = 1
test1 : genmatrix( lambda( [n,m], (if n>m then 0 else 1) * (n*m) ), 10, 10);
/* b : lreduce(cons, test1); */
ldisp(test1);
ldisp(map(lambda([ROW], rest(ROW)), test1));
multiplies : genmatrix( lambda( [n,m], sqrt(m)), 10, 10);
ldisp(multiplies);
ldisp(matrixp(multiplies));
ldisp( test1 * multiplies );
ldisp( matrixp( test1 * multiplies) );
s1 : test1 * multiplies;
ldisp(matrixp(s1));
ldisp( addcol(submatrix(s1, 1), zeromatrix(length(s1), 1)));
[ 1 2 3 4 5 6 7 8 9 10 ] [ ] [ 0 4 6 8 10 12 14 16 18 20 ] [ ] [ 0 0 9 12 15 18 21 24 27 30 ] [ ] [ 0 0 0 16 20 24 28 32 36 40 ] [ ] [ 0 0 0 0 25 30 35 40 45 50 ] (%t1) [ ] [ 0 0 0 0 0 36 42 48 54 60 ] [ ] [ 0 0 0 0 0 0 49 56 63 70 ] [ ] [ 0 0 0 0 0 0 0 64 72 80 ] [ ] [ 0 0 0 0 0 0 0 0 81 90 ] [ ] [ 0 0 0 0 0 0 0 0 0 100 ] [ 2 3 4 5 6 7 8 9 10 ] [ ] [ 4 6 8 10 12 14 16 18 20 ] [ ] [ 0 9 12 15 18 21 24 27 30 ] [ ] [ 0 0 16 20 24 28 32 36 40 ] [ ] [ 0 0 0 25 30 35 40 45 50 ] (%t2) [ ] [ 0 0 0 0 36 42 48 54 60 ] [ ] [ 0 0 0 0 0 49 56 63 70 ] [ ] [ 0 0 0 0 0 0 64 72 80 ] [ ] [ 0 0 0 0 0 0 0 81 90 ] [ ] [ 0 0 0 0 0 0 0 0 100 ] [ 3/2 ] [ 1 sqrt(2) sqrt(3) 2 sqrt(5) sqrt(6) sqrt(7) 2 3 sqrt(10) ] [ ] [ 3/2 ] [ 1 sqrt(2) sqrt(3) 2 sqrt(5) sqrt(6) sqrt(7) 2 3 sqrt(10) ] [ ] [ 3/2 ] [ 1 sqrt(2) sqrt(3) 2 sqrt(5) sqrt(6) sqrt(7) 2 3 sqrt(10) ] [ ] [ 3/2 ] [ 1 sqrt(2) sqrt(3) 2 sqrt(5) sqrt(6) sqrt(7) 2 3 sqrt(10) ] [ ] [ 3/2 ] [ 1 sqrt(2) sqrt(3) 2 sqrt(5) sqrt(6) sqrt(7) 2 3 sqrt(10) ] (%t3) [ ] [ 3/2 ] [ 1 sqrt(2) sqrt(3) 2 sqrt(5) sqrt(6) sqrt(7) 2 3 sqrt(10) ] [ ] [ 3/2 ] [ 1 sqrt(2) sqrt(3) 2 sqrt(5) sqrt(6) sqrt(7) 2 3 sqrt(10) ] [ ] [ 3/2 ] [ 1 sqrt(2) sqrt(3) 2 sqrt(5) sqrt(6) sqrt(7) 2 3 sqrt(10) ] [ ] [ 3/2 ] [ 1 sqrt(2) sqrt(3) 2 sqrt(5) sqrt(6) sqrt(7) 2 3 sqrt(10) ] [ ] [ 3/2 ] [ 1 sqrt(2) sqrt(3) 2 sqrt(5) sqrt(6) sqrt(7) 2 3 sqrt(10) ] (%t4) true [ 3/2 3/2 3/2 3/2 3/2 9/2 3/2 ] [ 1 2 3 8 5 6 7 2 27 10 ] [ ] [ 5/2 3/2 3/2 3/2 3/2 11/2 3/2 ] [ 0 2 2 3 16 2 5 2 6 2 7 2 54 2 10 ] [ ] [ 5/2 3/2 3/2 3/2 9/2 3/2 ] [ 0 0 3 24 3 5 3 6 3 7 3 2 81 3 10 ] [ ] [ 3/2 3/2 3/2 13/2 3/2 ] [ 0 0 0 32 4 5 4 6 4 7 2 108 4 10 ] [ ] [ 5/2 3/2 3/2 9/2 3/2 ] [ 0 0 0 0 5 5 6 5 7 5 2 135 5 10 ] (%t5) [ ] [ 5/2 3/2 11/2 3/2 ] [ 0 0 0 0 0 6 6 7 3 2 162 6 10 ] [ ] [ 5/2 9/2 3/2 ] [ 0 0 0 0 0 0 7 7 2 189 7 10 ] [ ] [ 15/2 3/2 ] [ 0 0 0 0 0 0 0 2 216 8 10 ] [ ] [ 3/2 ] [ 0 0 0 0 0 0 0 0 243 9 10 ] [ ] [ 5/2 ] [ 0 0 0 0 0 0 0 0 0 10 ] (%t6) true (%t7) true [ 3/2 3/2 3/2 3/2 3/2 9/2 3/2 ] [ 2 3 8 5 6 7 2 27 10 0 ] [ ] [ 5/2 3/2 3/2 3/2 3/2 11/2 3/2 ] [ 2 2 3 16 2 5 2 6 2 7 2 54 2 10 0 ] [ ] [ 5/2 3/2 3/2 3/2 9/2 3/2 ] [ 0 3 24 3 5 3 6 3 7 3 2 81 3 10 0 ] [ ] [ 3/2 3/2 3/2 13/2 3/2 ] [ 0 0 32 4 5 4 6 4 7 2 108 4 10 0 ] [ ] [ 5/2 3/2 3/2 9/2 3/2 ] [ 0 0 0 5 5 6 5 7 5 2 135 5 10 0 ] (%t8) [ ] [ 5/2 3/2 11/2 3/2 ] [ 0 0 0 0 6 6 7 3 2 162 6 10 0 ] [ ] [ 5/2 9/2 3/2 ] [ 0 0 0 0 0 7 7 2 189 7 10 0 ] [ ] [ 15/2 3/2 ] [ 0 0 0 0 0 0 2 216 8 10 0 ] [ ] [ 3/2 ] [ 0 0 0 0 0 0 0 243 9 10 0 ] [ ] [ 5/2 ] [ 0 0 0 0 0 0 0 0 10 0 ]
programmode: false;
plot2d ([atan(x), erf(x), tanh(x)], [x, -5, 5], [y, -1.5, 2])$
a[2,3]: 1;
b[2,3]: 2;
s : ev(a+b);
ldisp(s);
programmode: false;
ldisp([1,2,3] + [5,5,5]);
map(ldisplay, a);
Loading home/lockywolf.maxima/maxima-init.mac (%t1) b + a (%t2) [6, 7, 8] map: improper argument: a – an error. To debug this try: debugmode(true);
programmode: false;
matchfix( "|", ">" );
matchdeclare(a, numberp);
matchdeclare(b, numberp);
matchdeclare(c, true);
defmatch(ketp, c*|a,b>);
ldisp(ketp(|1,2>));
ldisp(ketp(alpha*|1,2>));
matchdeclare(a1, ketp);
matchdeclare(a2, ketp);
defmatch(qvectorp, a1 + a2);
ldisp(qvectorp(t*|1,2> + q*|1,2>));
tellsimp(a1 + a2, (op(a1)+op(a2)));
simp: true;
ldisp( ev(t*|1,2> + q*|1,2>));
Loading home/lockywolf.maxima/maxima-init.mac read and interpret /tmp/babel-cTZEIJ/maxima-VXgAxR.max set_tex_environment_default("\\[ "," \\]") [\[ , \]] programmode:false false matchfix("|",">")
matchdeclare(a,numberp) done matchdeclare(b,numberp) done matchdeclare(c,true) done defmatch(ketp,c*|a,b>) ketp ldisp(ketp(|1,2>)) (%t9) [c = 1, b = 2, a = 1] [%t9] ldisp(ketp(alpha*|1,2>)) (%t10) [c = alpha, b = 2, a = 1] [%t10] matchdeclare(a1,ketp) done matchdeclare(a2,ketp) done defmatch(qvectorp,a1+a2) qvectorp ldisp(qvectorp(t*|1,2>+q*|1,2>)) (%t14) false [%t14] tellsimp(a1+a2,op(a1)+op(a2)) tellsimp: warning: rule will treat '+' as noncommutative and nonassociative. [+rule1, simplus] simp:true true ldisp(ev(t*|1,2>+q*|1,2>)) (%t17) |1, 2> t + |1, 2> q [%t17] gnuplot_close() quit()
load(partition);
matchdeclare (pp, partition_expression("+", constantp, 0, "+", g, 'ANSpp));
tellsimp(foo(pp), ANSpp);
declare([a,b,c], constant);
disp([foo(a+b+c+x+y+3), foo(w), foo(b), foo(a*b+x), foo(a*b)]);
Loading home/lockywolf.maxima/maxima-init.mac [g(c + b + a + 3, y + x), foo(w), foo(b), g(a b, x), foo(a b)]
programmode: false;
nolabels: false;
matchdeclare([ann,bnn], lambda([r], atom(r) and not numberp(r)),
             cna, lambda([r], not atom(r)),
             numonly, numberp);
defmatch(m1, h(ann,bnn,numonly) );
defrule(r1, h(ann,bnn,numonly), ['numonlyA = numonly, 'bnnB = bnn, 'annC = ann])$
ldisp(m1(h(q,r,34)));
ldisp(r1(h(q,r,34)));
dadfa;
:lisp (symbol-plist '$ann)
Loading home/lockywolf.maxima/maxima-init.mac read and interpret /tmp/babel-cTZEIJ/maxima-aoIKhA.max set_tex_environment_default("\\[ "," \\]") [\[ , \]] programmode:false false nolabels:false false matchdeclare([ann,bnn],lambda([r],atom(r) and not numberp(r)),cna, lambda([r],not atom(r)),numonly,numberp) done defmatch(m1,h(ann,bnn,numonly)) m1 defrule(r1,h(ann,bnn,numonly),['numonlyA = numonly,'bnnB = bnn,'annC = ann]) ldisp(m1(h(q,r,34))) (%t8) [numonly = 34, bnn = r, ann = q] [%t8] ldisp(r1(h(q,r,34))) (%t9) [numonlyA = 34, bnnB = r, annC = q] [%t9] dadfa dadfa (MPROPS (NIL MATCHDECLARE (((LAMBDA (0 /tmp/babel-cTZEIJ/maxima-aoIKhA.max SRC)) ((MLIST (0 /tmp/babel-cTZEIJ/maxima-aoIKhA.max SRC)) $R) ((MAND (0 /tmp/babel-cTZEIJ/maxima-aoIKhA.max SRC)) (($ATOM (0 /tmp/babel-cTZEIJ/maxima-aoIKhA.max SRC)) $R) ((MNOT (0 /tmp/babel-cTZEIJ/maxima-aoIKhA.max SRC)) (($NUMBERP (0 /tmp/babel-cTZEIJ/maxima-aoIKhA.max SRC)) $R))))))) gnuplot_close() quit()
After a bit of searching, I found this code, by Leo Butler, from 2010 (Michael Talon pointed me to it on Maxima's mailing list). It should not be hard to tweak it for my purposes, right?
tellsimp(R.L, 1+L.R);
tellsimp(R^^i . L, 1+(R^^(i-1) . L) . R);
declare(h, integer);
declare(h, scalar);
matchdeclare(m, lambda([t1], featurep(t1,integer)),
             n, lambda([t2], featurep(t2,integer)),
             u, scalarp);
tellsimp(L.ket(n), sqrt(n)*ket(n-1));
tellsimp(L.(u*ket(n)), u*sqrt(n)*ket(n-1));
tellsimp(L^^m . (u*ket(n)), u*sqrt(n)*(L^^(m-1) . ket(n-1)));
tellsimp(R.ket(n), sqrt(n+1)*ket(n+1));
tellsimp(R.(u*ket(n)), u*sqrt(n+1)*ket(n+1));
tellsimp(R^^m . (u*ket(n)), u*sqrt(n+1)*(R^^(m-1) . ket(n+1)));
L.ket(h);
L.ket(2);
L.L.ket(h);
L^^3 . ket(5);
ket(5) + ket(5);
R.ket(5);
ket(1,2) + ket(1,2);
L.ket(0);
(L + R) . ket(5);
telsimp(R+L, L + R);   /* note: "telsimp" is a typo for tellsimp, so this line is a no-op */
L + R;
load("functs");
matchdeclare( qoperator, nonzeroandfreeof(ket) );
defmatch(qoperatorp, qoperator);
matchdeclare( qvector, lambda([t3], not(freeof(ket, t3))) );
matchdeclare( p, ?mplusp );
tellsimp( (p) . qvector, first(p).qvector + rest(p).qvector);
(L + R) . ket(5);
(8*L + 7*R) . ket(5);
sin(8*L + 7*R) . ket(5);
matchdeclare( ker, lambda([t4], atom(t4) and not(numberp(t4))));
/* telsimp(testfun, ker(qoperator)); */
/* defrule(r, ker(qoperator), powerseries(ker(qoperator), qoperator, 0)); */
matchdeclare(oppower, lambda([t5], integerp(t5) and is(t5>0)));
tellsimp( L^oppower, L^^oppower);
tellsimp( R^oppower, R^^oppower);
(R^5 . L^5).ket(5);
sin(8*L + L) . ket(5);
simpsum: true;
tellsimp( ker(qoperator) . qvector, powerseries(ker(qoperator), qoperator, 0) . qvector);
simpsum: false;
mysum : op(sum(i^2,i,0,inf));
ldisplay(mysum);
matchdeclare(sump, lambda([t6], not(atom(t6)) and is(equal(op(t6), mysum))));
defmatch(t7, sump);
t7(sum(i,i,0,inf));
defmatch(t8, sump . qvector );
tellsimp( sump . qvector, apply(sum, cons(ratexpand(first(sump)) . qvector, rest(args(sump)))));
simpsum: false;
sin(8*L + L) . ket(5);
t8( sum(L^i,i,0,inf) . ket(5) );
(L).ket(5);
Loading home/lockywolf.maxima/maxima-init.mac
read and interpret /tmp/babel-sPWpfR/maxima-NOwpwS.max
set_tex_environment_default("\\[ "," \\]")
[\[ , \]]
tellsimp(R . L,1+L . R)
[.rule1, simpnct]
tellsimp(R^^i . L,1+(R^^(i-1) . L) . R)
[.rule2, .rule1, simpnct]
declare(h,integer)
done
declare(h,scalar)
done
matchdeclare(m,lambda([t1],featurep(t1,integer)),n,
lambda([t2],featurep(t2,integer)),u,scalarp)
done
tellsimp(L . ket(n),sqrt(n)*ket(n-1))
[.rule3, .rule2, .rule1, simpnct]
tellsimp(L . (u*ket(n)),u*sqrt(n)*ket(n-1))
[.rule4, .rule3, .rule2, .rule1, simpnct]
tellsimp(L^^m . (u*ket(n)),u*sqrt(n)*L^^(m-1) . ket(n-1))
[.rule5, .rule4, .rule3, .rule2, .rule1, simpnct]
tellsimp(R . ket(n),sqrt(n+1)*ket(n+1))
[.rule6, .rule5, .rule4, .rule3, .rule2, .rule1, simpnct]
tellsimp(R . (u*ket(n)),u*sqrt(n+1)*ket(n+1))
[.rule7, .rule6, .rule5, .rule4, .rule3, .rule2, .rule1, simpnct]
tellsimp(R^^m . (u*ket(n)),u*sqrt(n+1)*R^^(m-1) . ket(n+1))
[.rule8, .rule7, .rule6, .rule5, .rule4, .rule3, .rule2, .rule1, simpnct]
L . ket(h)
ket(h - 1) sqrt(h)
L . ket(2)
sqrt(2) ket(1)
L . L . ket(h)
ket(h - 2) sqrt(h - 1) sqrt(h)
L^^3 . ket(5)
2 sqrt(3) sqrt(5) ket(2)
ket(5)+ket(5)
2 ket(5)
R . ket(5)
sqrt(6) ket(6)
ket(1,2)+ket(1,2)
2 ket(1, 2)
L . ket(0)
0
(L+R) . ket(5)
(R + L) . ket(5)
telsimp(R+L,L+R)
telsimp(R + L, R + L)
L+R
R + L
load("functs")
/usr/share/maxima/5.47.0/share/simplification/functs.mac
matchdeclare(qoperator,nonzeroandfreeof(ket))
done
defmatch(qoperatorp,qoperator)
defmatch: evaluation of atomic pattern yields: qoperator
qoperatorp
matchdeclare(qvector,lambda([t3],not freeof(ket,t3)))
done
matchdeclare(p,mplusp)
done
tellsimp(p . qvector,first(p) . qvector+rest(p) . qvector)
[.rule9, .rule8, .rule7, .rule6, .rule5, .rule4, .rule3, .rule2, .rule1,
simpnct]
(L+R) . ket(5)
sqrt(6) ket(6) + sqrt(5) ket(4)
(8*L+7*R) . ket(5)
7 sqrt(6) ket(6) + 8 sqrt(5) ket(4)
sin(8*L+7*R) . ket(5)
sin(7 R + 8 L) . ket(5)
matchdeclare(ker,lambda([t4],atom(t4) and not numberp(t4)))
done
matchdeclare(oppower,lambda([t5],integerp(t5) and is(t5 > 0)))
done
tellsimp(L^oppower,L^^oppower)
[^rule1, simpexpt]
tellsimp(R^oppower,R^^oppower)
[^rule2, ^rule1, simpexpt]
(R^5 . L^5) . ket(5)
120 ket(5)
sin(8*L+L) . ket(5)
sin(9 L) . ket(5)
simpsum:true
true
tellsimp(ker(qoperator) . qvector,
powerseries(ker(qoperator),qoperator,0) . qvector)
[.rule10, .rule9, .rule8, .rule7, .rule6, .rule5, .rule4, .rule3, .rule2,
.rule1, simpnct]
simpsum:false
false
mysum:op(sum(i^2,i,0,inf))
sum
ldisplay(mysum)
(%t44) mysum = sum
[%t44]
matchdeclare(sump,lambda([t6],not atom(t6) and is(equal(op(t6),mysum))))
done
defmatch(t7,sump)
defmatch: evaluation of atomic pattern yields: sump
t7
t7(sum(i,i,0,inf))
inf
==
\
[sump = > i]
/
==
i = 0
defmatch(t8,sump . qvector)
t8
tellsimp(sump . qvector,
apply(sum,cons(ratexpand(first(sump)) . qvector,rest(args(sump)))))
[.rule11, .rule10, .rule9, .rule8, .rule7, .rule6, .rule5, .rule4, .rule3,
.rule2, .rule1, simpnct]
simpsum:false
false
sin(8*L+L) . ket(5)
inf
==
2 i1 + 1 i1 2 i1 + 1
\ L (- 1) 9
> (----------------------–—) . ket(5)
/ (2 i1 + 1)!
==
i1 = 0
t8(sum(L^i,i,0,inf) . ket(5))
false
L . ket(5)
sqrt(5) ket(4)
gnuplot_close()
quit()
matchdeclare(m, lambda([t], featurep(t,integer)),   /* power */
             n1, lambda([t], featurep(t,integer)),  /* vector1 */
             n2, lambda([t], featurep(t,integer)),  /* vector2 */
             u, scalarp);                           /* argument */
tellsimp(a1.ket(n1,n2), sqrt(n1)*ket(n1-1,n2));
tellsimp(a1.(u*ket(n1,n2)), u*sqrt(n1)*ket(n1-1,n2));
tellsimp(a1^^m . (u*ket(n1,n2)), u*sqrt(n1)*(a1^^(m-1) . ket(n1-1,n2)));
/* tellsimp(a1^^m . (u*ket(n1,n2)), a1^^(m-1). a1 . ket(n1-1,n2)); */
a1.ket(2,3);
a1^^3 . ket(5,1);
tellsimp(a2.ket(n1,n2), sqrt(n2)*ket(n1,n2-1));
tellsimp(a2.(u*ket(n1,n2)), u*sqrt(n2)*ket(n1,n2-1));
tellsimp(a2^^m . (u*ket(n1,n2)), u*sqrt(n2)*(a2^^(m-1) . ket(n1,n2-1)));
a2.ket(2,3);
a2^^3 . ket(1,5);
a2.a1.(w * ket(4,8) - (3/7) * ket(11,3));
load("diag");
mat_function(sin, (a1 + a2*a1)(w * ket(4,8) - (3/7) * ket(11,3)));
A : matrix([2,4],[1,2]);
M : mat_function(sin, t*A);
Loading home/lockywolf.maxima/maxima-init.mac read and interpret /tmp/babel-cTZEIJ/maxima-3v9KzK.max set_tex_environment_default("\\[ "," \\]") [\[ , \]] matchdeclare(m,lambda([t],featurep(t,integer)),n1, lambda([t],featurep(t,integer)),n2, lambda([t],featurep(t,integer)),u,scalarp) done tellsimp(a1 . ket(n1,n2),sqrt(n1)*ket(n1-1,n2)) [.rule1, simpnct] tellsimp(a1 . (u*ket(n1,n2)),u*sqrt(n1)*ket(n1-1,n2)) [.rule2, .rule1, simpnct] tellsimp(a1^^m . (u*ket(n1,n2)),a1^^(m-1) . a1 . ket(n1-1,n2)) [.rule3, .rule2, .rule1, simpnct] a1 . ket(2,3) sqrt(2) ket(1, 3) a1^^3 . ket(5,1) Maxima encountered a Lisp error: Binding stack exhausted. PROCEED WITH CAUTION. Automatically continuing. To enable the Lisp debugger set debugger-hook to nil. tellsimp(a2 . ket(n1,n2),sqrt(n2)*ket(n1,n2-1)) [.rule4, .rule3, .rule2, .rule1, simpnct] tellsimp(a2 . (u*ket(n1,n2)),u*sqrt(n2)*ket(n1,n2-1)) [.rule5, .rule4, .rule3, .rule2, .rule1, simpnct] tellsimp(a2^^m . (u*ket(n1,n2)),u*sqrt(n2)*a2^^(m-1) . ket(n1,n2-1)) [.rule6, .rule5, .rule4, .rule3, .rule2, .rule1, simpnct] a2 . ket(2,3) sqrt(3) ket(2, 2) a2^^3 . ket(1,5) 2 sqrt(3) sqrt(5) ket(1, 2) a2 . a1 . (w*ket(4,8)-(3/7)*ket(11,3)) 3 ket(11, 3) a2 . a1 . (ket(4, 8) w - -------–—) 7 load("diag") /usr/share/maxima/5.47.0/share/contrib/diag.mac mat_function(sin,(a1+a2*a1)(w*ket(4,8)-(3/7)*ket(11,3))) 3 ket(11, 3) mat_function(sin, a1 a2 + a1(ket(4, 8) w - -------–—)) 7 A:matrix([2,4],[1,2]) [ 2 4 ] [ ] [ 1 2 ] M:mat_function(sin,t*A) [ sin(4 t) ] [ ---–— sin(4 t) ] [ 2 ] [ ] [ sin(4 t) sin(4 t) ] [ ---–— ---–— ] [ 4 2 ] gnuplot_close() quit()
I fiddled and fiddled with this and understood nothing. The problem should be relatively simple, yet I keep wandering further and further into the weeds.
If you implement the operators as "variables", the power-series expansion turns into nonsense, because it needs matrix powers rather than "ordinary" ones. If you implement them as custom operators, you cannot form an expression that contains no arguments. mat_function refuses to expand an expression into a series if there are no matrices in it, and with matrices things are bad too, because a ket vector does not map well onto a one-dimensional vector.
And this is the simple problem! I understand nothing. If, in the last run, you flip simpsum:false to simpsum:true, the system falls into an infinite loop, which is bad :(. I don't know what to do with this next.
Написать "алгебру операторов гармонического осциллятора", для приведения выражений с операторами к каноническому виду.
Тестовый пример:
\[ -\frac{76}{3m}LRRLLR + 23LR + \ldots + 5RR \]
All the R operators must end up to the left of the L operators.
Well, surely here one can just apply the "harmonic oscillator algebra" from the previous exercise head-on?
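For reference, the first rewrite rule below simply restates the commutation relation from the previous exercise, with \(I\) the identity operator:

\[ R\,L = I + L\,R \]

(Note that this pushes \(R\) to the right of \(L\), which is the opposite of the canonical order requested above; keep that in mind when reading the output.)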
tellsimpafter(R.L, I+L.R);
tellsimpafter(R^^i . L, I+(R^^(i-1) . L) . R);
tellsimpafter(R.I, R);
tellsimpafter(I.R, R);
tellsimpafter(L.I, L);
tellsimpafter(I.L, L);
a: ev((-(76/(3*m))*L.R.R.L.L.R + 23*L.R + L.L.L.R.L.R.L + 5*R.R), expand);
expand(a);
Loading home/lockywolf.maxima/maxima-init.mac
read and interpret /tmp/babel-sPWpfR/maxima-yHv61R.max
set_tex_environment_default("\\[ "," \\]")
[\[ , \]]
tellsimpafter(R . L,I+L . R)
[.rule1, simpnct]
tellsimpafter(R^^i . L,I+(R^^(i-1) . L) . R)
[.rule2, .rule1, simpnct]
tellsimpafter(R . I,R)
[.rule3, .rule2, .rule1, simpnct]
tellsimpafter(I . R,R)
[.rule4, .rule3, .rule2, .rule1, simpnct]
tellsimpafter(L . I,L)
[.rule5, .rule4, .rule3, .rule2, .rule1, simpnct]
tellsimpafter(I . L,L)
[.rule6, .rule5, .rule4, .rule3, .rule2, .rule1, simpnct]
a:ev(-(76/(3*m))*L . R . R . L . L . R+23*L . R+L . L . L . R . L . R . L+5*R . R,expand)
(a two-dimensional console rendering of the reordered expression follows, with noncommutative powers displayed as <2> and <4>, mangled in this export)
expand(a)
(the same mangled rendering is repeated)
gnuplot_close() quit()
Miracles again. Fine, I will ask the Maxima mailing list what is actually going on here.
All in all, until I read at least an introduction to general relativity, it is unlikely that I will solve this problem. Of the problems left to choose from there are four more, and I do not know all the terms appearing in their statements.
In short, the project failed. Maybe in five years or so I will have learned enough to understand what all of this is about, and will come back to it.
Honestly, what kind of short review could I possibly write about Sven Vermeulen's "SELinux System Administration"?
I cannot write a detailed review, because I have not read the whole book from cover to cover.
I cannot write a synopsis, because the book covers a very broad field and touches on a great many topics.
I cannot even write a clear HOWTO on "how to use SELinux to harden your server", because, first, that task depends far too heavily on the structure of the server itself, and second, the book does not contain the required information.
So what is this book actually about?
The last time I ran into this kind of difficulty writing a review was three years ago, when I was trying to write down my view of the "INCOSE Systems Engineering Handbook".
On the one hand, that book contains nothing objectionable and includes everything needed; on the other hand, it is very hard to explain "what it is actually about".
This time, however, my problem lies not with the book itself, but with the subject it sets out to explain.
SELinux is a very complex object, complex in its details and sheer bulk rather than in algorithmic complexity. SELinux has many details and features because it has been in development for a long time. Some features exist to compensate for shortcomings of the original model, some to keep it in step with other Linux subsystems, and some were simply infelicitous design to begin with.
You could also say that this text is not quite a review of a book; it is more my attempt to convey "just enough knowledge not to be dumbfounded when the system breaks", rather than an attempt to judge the quality of the book's content.
Most SELinux documentation begins with "SELinux is one of the Linux kernel's security modules". Here "module" means not "an object file loadable into the kernel" but "code implementing a certain interface, interchangeable with other such code". In one sense this is true, since besides SELinux there are other "security modules", for example AppArmor.
On the other hand, kernel code alone cannot implement the whole of SELinux; userspace code is needed as well. And beyond the most obvious way of poking at SELinux (through GNU Coreutils), tools such as PAM, systemd, dbus, and ipsec all carry some degree of SELinux support.
What the hell? What sort of monster is this?
Yet SELinux really is like that, threading through the entire system from end to end. Were it otherwise, an all-encompassing security system could not be built.
So what is it, after all?
Roughly speaking, SELinux is a purpose-built formal system for describing, in a very detailed language, "who is allowed to do what".
Is this definition still not easy to grasp?
Put differently: the Linux kernel has a "choke point". At this "choke point", "something", bearing "some label", performs "some operation" on "some object", and the Linux kernel decides whether to allow that operation.
Still too abstract?
Let me give an example.
Most of the time the "something" is a process. A subtlety lurks here, because we normally want to distinguish system processes from user processes, even though in theory there is no difference: both are just programs. At this point we would expect at least a "process label" ("system" or "user"). But in fact, after their long years of development, what the SELinux developers present the administrator with is a "user label", a "role label", and a "process label". Do not assume these three labels have fixed meanings, though: do not assume, say, that the "user label" corresponds to a system user name, or the "process label" to a system service name, even if on RedHat systems that happens to be the case.
The typical "object" is a file. It, too, carries three labels, but for files the meaning of the labels is even more bewildering. In many cases only the third label, the "context", means anything. File labels are stored in the filesystem's "extended attributes"; except that this is not entirely true.
Unfortunately, this "not entirely true" runs through the whole SELinux subsystem. In reality the labels come from a specially arranged directory, /etc/selinux, and there is a dedicated command, "restorecon", which applies the labels defined in that directory to all the files.
Why do it this way? For speed, perhaps? Or so that a user may change a file's label while being forbidden to change the files under /etc/selinux?
Other object types can be... anything at all! For example, tcp-ip ports, ipsec network packets, dbus services. In the last case the kernel takes no part in the SELinux check; a special library does instead.
Still, both the kernel and that special library need some way of knowing what is allowed and what is forbidden. (In the default SELinux setup everything is forbidden, so it is remarkably easy to lock yourself out of administering your own system.) Every system with SELinux enabled normally has an /etc/selinux directory holding many policy files, in which some "policy writer" is supposed to spell out the exact interplay of labels in a dedicated language, in full detail: step by step, file by file, user by user.
Naive young thing that I was when I opened Sven Vermeulen's book, I assumed that after the introductory chapter I would spend most of my time writing a few "training policies".
Not so. For policy writing Sven Vermeulen has a second book, SELinux Cookbook, devoted almost entirely to it. And although in this "review" I spoke of "a dedicated language", the book in fact mentions two more, different policy languages. A most wearying affair, whose sole redeeming feature is that one of the three languages resembles Lisp.
In practice, almost nobody writes such policies themselves, rare exceptions aside. Policies usually arrive on a system in one of two ways: either RedHat engineers have already provided you with a policy package, and you need only install it (and inspecting the policy code is not that simple: policy packages are binary files!), or you use the magic command "audit2allow", which generates a permissive policy for you from the latest recorded security violations. Such a policy may be both imprecise and inefficient, but it will help you get your service running "right now".
Generally speaking there are also the seinfo and sesearch commands, which can query the existing policy rules, but in my opinion debugging SELinux policies is exceedingly hard.
So: if most system administrators do not write policy code, and the book devotes only a single chapter to policies, what is the book about, then?
It explains the many details one runs into while using SELinux.
Take "selinux booleans", for instance. These are variables (defined inside a policy) that make the policy change its own behaviour. What is their point? Why not simply write a separate policy? On the face of it, because the default policy is a binary module and the ordinary system administrator will not be writing policies.
The book further explains how to switch the SELinux subsystem off or on entirely, and how to enable permissive mode, in which SELinux records violations but does not block anything.
As for file contexts, the commands matchpathcon, chcon, runcon, and the rest that let the user inspect, check, and verify path labels, are explained rather well.
Certain subsystems help the kernel assign the correct context to various objects; among them are systemd, dbus, and PAM, peer subsystems that the Linux kernel has no way of reaching directly. The book describes this interplay in fair detail as well.
A whole chapter describes how SELinux cooperates with Docker and libvirt. At first glance this seems redundant, since docker and libvirt already embody a security posture that relies on system-level isolation, precisely in order to avoid overly detailed security configuration. Still, it is useful to know the capability exists.
One thing the book barely covers is the use of SELinux in Android. By any obvious measure Android is the second-largest SELinux use case, and counted by number of devices probably the first. The book says very little about Android, so that knowledge will have to be sought in other books.
Summing up: by my estimate, fully understanding SELinux requires reading at least 5 books:
A lot? Indeed, a lot.
Finally, wrapping up this quasi-review, I must say that I failed to find a way to start using SELinux in a small, contained setting. Perhaps, as a kindergarten-level exercise, you could isolate nginx by installing it on a stock RedHat system and enabling the default policy, and thereby add the line "experienced with SELinux" to your CV. But I would not count that as a real accomplishment.
The origin of this research is here: https://new.qq.com/rain/a/20200901A0OB4100
This is my exercise in translation from Chinese into English.
网上流传着这么一个有趣的观点,罗马帝国现 在的正统在河北省张家口市。这样的说法是怎 么来的呢? |
There is a certain viewpoint spreading recently on the net, that the True Roman Imperial Heritage is now in the city of ZhangJiaKou, in the Chinese province of Hebei. Where has this viewpoint come from? |
众所周知罗马帝国作为西方文明的重要组成部 分,早在1453年就亡于奥斯曼土耳其帝国,为 什么会和远在东方的河北张家口联系上呢?让 我们来看看在网友们神奇的脑洞里,这远隔千 里的二者是如何联系在一起的呢? |
As everyone knows, the Roman Empire was an important component of Western civilisation, and as early as 1453 it was destroyed by the Ottoman Turkish Empire. So how could it be connected to ZhangJiaKou, Hebei, far away to the East? Let us peer into the netizens' marvellous flights of fancy and see how two places a thousand miles apart come to be connected. |
这个观点的源头,来自于这么一段文字: |
This viewpoint is coming from the following piece of text: |
罗马正统在君堡 | Roman Heritage in the Lord's City |
君堡正统俄国继 | Lord's City by Orthodox Rus succeeded |
俄国正统在鞑靼 | Russian Heritage in the Tatar |
鞑靼正统在蒙古 | Tatar Heritage in Mongolia |
蒙古正统察哈尔 | Mongol Heritage in Chahar |
察哈尔省会张家口 | Chahar's capital is ZhangJiaKou |
这种网友们的论证,看起来似乎层层推进,逻 辑严谨,毫无指摘之处。那么到底有没有道理 呢?让我们顺着其中逻辑关系,盘一盘其中门 道吧。 |
The netizens' reasoning seems to advance layer by layer, strictly logical, leaving nothing to pick at. But does it actually hold up? Let us follow its logical links and work through the moves one by one. |
罗马正统在君堡 | Roman Heritage in Lord's City |
罗马帝国兴起于罗马城,据传古城的建立者罗 慕路斯兄弟是战神马尔斯的儿子,二人小时候 由一头母狼的乳汁喂养长大的,所以至今罗马 城内还有着母狼哺育两个儿童的雕像。 |
The Roman Empire arose from the city of Rome. Legend has it that the founders of the ancient city, the Romulus brothers, were sons of Mars, the God of War; as infants the two were suckled by a she-wolf until they grew up, which is why to this day Rome keeps a statue of a she-wolf nursing two boys. |
罗马共和国时期征服了地中海大部,建立起了 一个以地中海为内海的伟大国家。 |
Rome, in its Republican period, conquered most of the Mediterranean and established a great state with the Mediterranean as its inner sea. |
公元前27年元老院授予恺撒的继承人屋大维“ 奥古斯都”的尊号,屋大维成为罗马帝国的第 一位皇帝,罗马从此进入了帝国时代。 |
In 27 BC the Senate awarded Caesar's successor Octavian the honorific "Augustus"; Octavian became the first Roman Emperor, and Rome entered its imperial era. |
公元395年,罗马帝国分裂为东西两部分,但 是西罗马帝国在476年被蛮族日耳曼人攻灭, 废黜西罗马皇帝的日耳曼人宣誓效忠于东罗马 。 |
In 395 AD the Roman Empire split into an Eastern and a Western part, but the Western Roman Empire was destroyed by Germanic barbarians in 476 AD, and the Germanic tribes that deposed the last Western Roman Emperor swore fealty to Eastern Rome. |
于是罗马帝国只剩下了东罗马帝国(又称拜占 庭帝国)一个火种。 |
Thereupon the only ember of the Roman Empire left burning was the Eastern Roman Empire (also called the Byzantine Empire). |
而东罗马帝国的都城在君士坦丁堡(今土耳其 伊斯坦布尔),这里便是罗马正统在君堡的来 历了。 |
And since the capital of the Eastern Roman Empire was Constantinople (present-day Istanbul, Turkey), this is where "Roman Heritage is in the Lord's City" comes from. |
西罗马帝国灭亡后,东罗马帝国成为了罗马帝 国实际意义上的继承者。 |
After the fall of the Western Roman Empire, the Eastern Roman Empire became the successor of the Roman Empire in every practical sense. |
只可惜14世纪奥斯曼土耳其崛起,于1453年攻 陷君士坦丁堡,东罗马帝国灭亡。 |
Unfortunately, in the 14th century the Ottoman Turks rose to power; in 1453 they stormed Constantinople, and the Eastern Roman Empire was no more. |
奥斯曼土耳其攻灭东罗马后,宣称自己是东罗 马帝国的继承者,然而大家都知道,奥斯曼帝 国是信仰伊斯兰教的土耳其人,从宗教、文化 及种族方面,跟东罗马帝国没半毛钱关系。 |
After extinguishing Eastern Rome, the Ottoman Turks proclaimed themselves the successors of the Eastern Roman Empire; however, as everyone knows, the Ottoman Empire was made up of Turks professing Islam, and in religion, culture, and ethnicity it had nothing whatsoever in common with Eastern Rome. |
况且他们还宣称自己是突厥帝国的继承人呢, 这种说法,自然是无法令人信服的。 |
Besides, they also claimed to be the successors of the Turkic Empire, so such a claim naturally convinces nobody. |
那为什么俄国自称是东罗马的继承呢? |
So why does Russia call itself the successor of Eastern Rome? |
原来1473年莫斯科大公伊凡三世娶了拜占廷帝 国的索菲娅·帕列奥罗格公主,所以俄国宣称 自己继承了东罗马正统。 |
It so happens that in 1473 the Grand Duke of Moscow, Ivan the Third, married Sophia Palaiologina, a princess of the Byzantine Empire, and on these grounds Russia proclaimed itself heir to the Eastern Roman legacy. |
欧洲各国的法理继承和中国不同,只要有血缘 关系,都是有继承权的,所以欧洲各国有很多 女儿,甚至外甥继承王位的例子。 |
The laws of succession in European countries differ from China's: any blood relationship confers a right of inheritance, which is why Europe offers many examples of daughters, and even nephews, inheriting the throne. |
和索菲娅公主一起来到莫斯科的,还有东罗马 帝国的双头鹰标志,以及东罗马国教东正教。 |
Together with Princess Sophia there arrived in Moscow the double-headed eagle, the emblem of the Eastern Roman Empire, and Eastern Rome's state religion, the Orthodox faith. |
双头鹰标志至今还存在于俄罗斯国徽上;而俄 罗斯目前仍是全球东正教的中心。 |
The double-headed eagle remains on the Russian state coat of arms to this day, and Russia is still the world centre of the Orthodox faith. |
1547年东正教大主教为莫斯科大公伊凡四世加 冕为帝,称沙皇。 |
In 1547 the Orthodox archbishop crowned the Grand Duke of Moscow, Ivan the Fourth, as Emperor, with the title of Tsar. |
在东罗马帝国时期,东罗马帝国皇帝被称为恺 撒,转化成俄语,便是沙皇。 |
In the times of the East Roman Empire, the Emperor was called Caesar, which, when pronounced in Russian, sounds like Tsar. |
沙俄此举就是表明自己是东罗马的正统继承者 ,甚至还以“第三罗马”自居。 |
By this move Tsarist Russia declared itself the legitimate successor of Eastern Rome, and even styled itself "the Third Rome". |
所以,君堡正统俄国继,似乎也没有什么不妥 的。 |
So "Lord's City by Orthodox Rus succeeded" also seems to contain nothing particularly amiss. |
俄国正统在鞑靼,鞑靼正统在蒙古 |
"Russia was succeeded by the Tatar, and Tatar was succeeded by Mongol" |
西方流传着这么一句谚语,剥开一个俄罗斯人 ,你会发现一个鞑靼人。 |
In the West there circulates a proverb: scratch a Russian, and you will find a Tatar. |
鞑靼人在西方是蒙古人及蒙古人与当地人融合 后的后裔的统称。 |
In the West, "Tatars" is a name given to the decendants of mongols, as well as other peoples living in the same land. |
那么为什么俄罗斯人剥开后会是蒙古人呢? |
So why would scratching a Russian turn up a Mongol? |
这就要从历史上面积最大的帝国——蒙古帝国说 起了。 |
The story begins with the largest empire in history by area: the Mongol Empire. |
成吉思汗有四个儿子,他们的后裔后来因为蒙 古大汗之位发生争执,蒙古帝国分裂出四大帝 国,分别是金帐汗国(又称钦察汗国),察合 台汗国、窝阔台汗国、伊利汗国(又称伊尔汗 国)。 |
Genghis Khan had four sons, and when their descendants later quarrelled over the title of Great Khan, the Mongol Empire split into four khanates: the Golden Horde (also called the Kipchak Khanate), the Chagatai Khanate, the Ögedei Khanate, and the Ilkhanate. |
四个汗国都奉入主中原的元朝为主,但是各自 之间矛盾不断,常有战争。 |
All four khanates acknowledged the suzerainty of the Yuan dynasty ruling in China, yet among themselves they quarrelled constantly and frequently went to war. |
此时的俄罗斯地区,便是在金帐汗国统治之下 。 |
The Russian lands at that time lay under the rule of the Golden Horde. |
但是蒙古人口有限,很多事务都交给当地贵族 管理。 |
But the Mongols were few in number, and many administrative affairs were delegated to the local nobility. |
俄罗斯帝国的先祖,绰号钱包的莫斯科大公伊 凡一世在金帐汗国大汗的支持下,获得了全俄 罗斯地区的收税权。 |
The forefather of the Russian Empire, Ivan the First, Grand Duke of Moscow, nicknamed "Moneybag" (Kalita), obtained, with the support of the Khan of the Golden Horde, the right to collect taxes throughout the Russian lands. |
依靠收税权这份肥差,莫斯科大公势力迅速壮 大,并在大汗的支持下,成为了俄罗斯地区的 实际统治者。 |
Thanks to this lucrative job, the power of the Grand Dukes of Moscow grew rapidly, and with the Great Khan's backing they became the de facto rulers of the Russian lands. |
14世纪末期,金帐汗国逐渐衰落。 |
By the end of the 14th century the Golden Horde was gradually declining. |
1480年,莫斯科公国最终击败大汗,从蒙古人 的统治下独立出来。 |
In 1480 the Duchy of Moscow finally defeated the Great Khan and won independence from Mongol rule. |
从1240年到1480年,蒙古人对俄罗斯地区进行 了长达240年的统治。 |
From 1240 to 1480, for a whopping 240 years, the Mongols ruled over the Russian lands. |
大量俄罗斯上层贵族同蒙古贵族通婚,据统计 ,当年在俄国具备蒙古血统的大公足足有九十 二个之多,他们产生了三百多个贵族姓氏。 |
Many high-ranking Russian nobles intermarried with Mongol nobility; by one count, as many as ninety-two princes of that era had Mongol blood, and they gave rise to more than three hundred noble family names. |
而蒙古人以及随蒙古人出征来到俄罗斯地区的 各族人和当地人融合的后裔,也孕育出鞑靼等 多个俄罗斯民族。 |
Moreover, the Mongols, together with the peoples of other nations who came to the Russian lands on their campaigns, mixed with the natives, and their descendants gave rise to the Tatars and several other ethnicities of Russia. |
苏联革命导师列宁的奶奶是卡尔梅克人,和当 年东归的土尔扈特部蒙古,是同一支蒙古部落 。 |
The grandmother of Lenin, the leader of the Soviet revolution, was a Kalmyk, of the very same branch of Mongol tribes as the Torghuts, who later migrated back east. |
而苏联领导人斯大林的母亲也同样有着蒙古血 统。 |
And the mother of Stalin, the Soviet leader, likewise had Mongol blood. |
据说每七个俄罗斯人,就有一个人有蒙古人的 血统。 |
It is said that one in every seven Russians has Mongol ancestry. |
所以我们会看到很多苏联领导人的样貌,都多 少有一些东方人的特征 |
That is why, looking at the faces of many Soviet leaders, one can spot at least a hint of Eastern features. |
俄罗斯的建立以及统治制度,都深受蒙古金帐 汗国的影响。 |
Both the founding of the Russian state and its system of rule were deeply influenced by the Mongol Golden Horde. |
但是经过百年的延续,目前俄罗斯主体还是以 斯拉夫人为主,蒙古血统虽然存在,但是影响 力和存在感,是十分微弱的。 |
However, after the centuries that have passed, Russia today remains predominantly Slavic; the Mongol bloodline exists, but its influence and visibility are very faint. |
就好像我们汉族其实也融合了大量匈奴鲜卑突 厥契丹等民族的血脉,但是不能说我们就是匈 奴鲜卑等民族的后代吧。 |
Just like us, the Han people, as in fact we also have a lot of foreign blood floating in our veins, including the Huns, the Xianbei, the Turks, the Khitans. Still, nobody is claiming that we are "just" the descendants of the Huns, the Xianbei, and the like. |
所以说俄国正统在鞑靼,鞑靼正统在蒙古,在 一定意义上是说得通的,但是牵强附会的意义 更大。 |
Therefore, saying that the Russian heritage is in the Tatars, the Tatar heritage is in the Mongols, in some sense is saying the same thing, although, maybe that's just a little bit more of a stretch. |
那蒙古又是如何和察哈尔省会张家口联系上的 呢? |
But what is the connection between Mongolia and ZhangJiaKou, the Chahar's provincial capital? |
察哈尔省是民国时期的一个省,省会在今天的 张家口市桥西区,下辖今天内蒙古中部东部, 河北及山西省北部地区。 |
The province of Chahar existed during the times of the Republic of China, with its provincial capital located at the present QiaoXi district of the ZhangJiaKou city, and administered the area corresponding to the present Inner Mongolia's center and east, the present Hebei province, and the present northern Shanxi. |
而察哈尔省名字则是源于清朝时期的蒙古察哈 尔部。 |
Moreover, the name Chahar comes from the Qing Dynasty time prefecture in Mongolia, called Chahar. |
察哈尔一词源于波斯语,其意为“家人”、“奴 仆”、“卫士”、“宫殿卫队”之意。 |
The word "Chahar" itself comes from the Persian language, and its meaning is something like "family members", "servants", "bodyguards", "palace guards". |
成吉思汗生前把以“察哈尔”命名的自己一部分 家人和贴身仆人赐给幼子拖雷之妻。 |
During the life of Dzhenghis Khan, the name "Chahar" was used by some of the family members and close servants, attributed to the wife of the infant Tolui, the fourth son of Dzhenghis Khan. (was the wife infant, or Tolui himself?) |
北元时期,察哈尔成为了就是蒙古大汗的直属部落的名字。
During the Northern Yuan period, "Chahar" became the name of the tribe directly subordinate to the Great Khan.
从此,察哈尔部就和成吉思汗直系后裔黄金家族一样,成为了蒙古宗主的代表。
From then on, the Chahar tribe, like the "Golden Family" of Genghis Khan's direct descendants, came to represent Mongol suzerainty.
1634年,察哈尔部首领,蒙古末代大汗林丹汗死于青海。
In 1634, the head of the Chahar tribe, the last Mongolian Great Khan, Ligdan Khan, died in Qinghai.
其子额哲于次年投降满洲后金,皇太极封其为察哈尔亲王,将次女固伦温庄长公主马喀塔嫁给了他。
The next year his son Ejei surrendered to the Manchu Later Jin; Hong Taiji granted him the title of Prince of Chahar and married his own second daughter Makata, the Gulun Princess Wenzhuang, to him.
1636年,蒙古各部会聚沈阳,承认皇太极为汗。
In 1636, the Mongol tribes convened in Shenyang and recognized Hong Taiji as their Khan.
从此,清朝皇帝就兼任了蒙古大汗。
From then on, the Qing emperors concurrently held the title of the Mongolian Great Khan.
康熙时期,察哈尔部被清廷迁至后来的察哈尔省地区安置。
During the reign of the Kangxi Emperor, the Qing court relocated and resettled the Chahar tribe in the area of the later Chahar province.
察哈尔省便是来源于此。
This is exactly where the province of Chahar got its name.
如此看来,所谓的蒙古正统察哈尔,察哈尔省会张家口,倒是挺有道理的。
Looked at from this angle, the sayings "the Mongolian heritage is in Chahar" and "Chahar's capital is ZhangJiaKou" actually make good sense.
整个逻辑链条捋下来,前后两端都是毫无破绽的,只是中间俄罗斯和蒙古的所谓正统,太过于牵强了。
Walking down the whole chain of reasoning, both ends hold up without a flaw; only the middle link, the so-called succession from Russia to the Mongols, is far too much of a stretch.
所以,以后大家听到如此的戏闻,只当是玩笑,切莫当真了。
So, the next time you hear such an amusing tale, treat it as a joke and do not take it seriously.
当然历史方面有很多类似的戏谈,粗略看起来,挺有道理,但是细究起来,很多逻辑破绽。
Of course, there are many such jocular takes on history which at a cursory glance seem to make a lot of sense, but which, when scrutinised carefully, are full of logical holes.
很多时候,大家不必当真。
Most of the time, there is no need to take them seriously.
当然,也大可不必因为这些戏闻痛心疾首,毕竟如果通过对这些戏谈的研究,能够了解更多的历史知识和趣闻,何乐而不为呢?
Of course, there is also no need to agonise over these tales; after all, if studying one of them lets you learn more historical knowledge and curious facts, why not enjoy it?
This post presents my opinion of Richard Stevens' book "UNIX Network Programming". UNP (UNIX Network Programming) is the most classic book on network programming in the world.
The field also has the following classic titles:
No kidding: the classic network programming reading list includes two books by Richard Stevens, fully two thirds of it. How did that happen?
In fact, Stevens is a true leader and one of the founders of this field. UNP, one can say, has clearly stood the test of time, having been reprinted three times.
With this post I want to outline the influence this book has had on me.
Why did I start learning network programming? Unlike many other things, I picked up this skill because my job required it. I had never intended to understand the details of sockets and TCP/IP. I had learnt curl long ago and thought that would be enough, because I assumed the Internet would get better and more stable year after year.
In reality, the Internet gets worse year after year. The attitude of my youth, "just use the default settings and you will be fine", is completely unsuited to today's situation. So even though my specialty lies elsewhere, I ought to know how the Internet works and how to tune it.
At first I hoped to avoid learning logic that is too low-level. However, while debugging I would use many high-level tools, and I kept finding that they either err in their assumptions or still require the user to understand the low-level logic of the network.
To put it simply, at some point I discovered that in order to debug my own poor network connectivity I had to understand networks, protocols, packet structure, and user-plane encryption in detail.
Stevens' book is an exposition of the UNIX socket programming interface. UNIX was one of the earliest operating systems to support networking; in fact, even Windows uses the BSD socket interface.
In this respect, Linux took its networking interface over from old UNIX wholesale. Although today practically nobody uses UNIX itself any more, most operating systems dare not change its API in any way.
So learning to understand the socket API should be the most widely applicable skill we can acquire.
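To make this concrete, here is a minimal sketch of that API (my own illustration, not an excerpt from UNP; the address and port are placeholders for a local echo service). It is the classic four-call sequence: socket(), connect(), write()/read(), close().

#include <arpa/inet.h>      /* inet_pton(), htons() */
#include <netinet/in.h>     /* struct sockaddr_in */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>     /* socket(), connect() */
#include <unistd.h>         /* read(), write(), close() */

int main(void)
{
    struct sockaddr_in srv;
    char buf[256];
    ssize_t n;

    int fd = socket(AF_INET, SOCK_STREAM, 0);        /* a TCP endpoint */
    if (fd < 0) { perror("socket"); return 1; }

    memset(&srv, 0, sizeof srv);
    srv.sin_family = AF_INET;
    srv.sin_port = htons(7);                         /* 7 = echo, a placeholder */
    inet_pton(AF_INET, "127.0.0.1", &srv.sin_addr);

    if (connect(fd, (struct sockaddr *)&srv, sizeof srv) < 0) {
        perror("connect"); return 1;
    }
    write(fd, "ping\n", 5);
    n = read(fd, buf, sizeof buf - 1);               /* blocking read of the reply */
    if (n > 0) { buf[n] = '\0'; printf("got: %s", buf); }
    close(fd);
    return 0;
}

Almost everything else in the book (UDP, SCTP, non-blocking I/O, out-of-band data) is a variation on this handful of calls.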
Besides the socket API, Linux also has a home-grown API of its own, called Netlink, but I have not used it yet.
TCP/IP was born back in the 1980s, when networks were very different from today's. TCP/IP therefore includes many features the engineers of the time expected to be very useful, but which no longer fit the present day. For example, the complete Internet protocol suite contains more than 15 protocols at its second layer, yet just 4 of them cover 99% of the use cases.
The march of time does not stop. A pessimist would say: "if only two protocols out of the suite get used, that amounts to failure, hence a bad design". I would say the opposite: "if your protocol stays in use for more than 40 years, and in particular if devices built 40 years ago can still interoperate, then your design is actually very good."
Outline This book's coverage is very detailed. Besides basic TCP and UDP, it also covers raw sockets, virtual private networks (VPN), routing, and other socket types.
In fact, the original edition of the book consisted of two volumes: the first covered remote operation, while the second covered local operation, such as UNIX-domain sockets and SunRPC.
The book also includes material on the Domain Name System (DNS).
After reading this book… I cannot say you will become the best expert in the field, but you will certainly be ahead of 80% of programmers.
Multi-level design As one of the best textbooks in the field, this book strives for a multi-level design of its content. I can distinguish at least 3 levels in it.
The second level comprises the chapters on TCP, UDP, and SCTP, which introduce the reader to the differences between these communication protocols, the rationale behind them, and their common usage. For most developers the second level should be enough for most of their work… provided they write plain C.
At this point the book's material becomes more involved. The third level does not actually contain that many complex features, but it still sticks to plain C. Plain C is close to the hardware, yet not particularly friendly to the user. The third level ought to be the most useful to developers, but since the designers of TCP/IP had to express fairly complex structures in fairly low-level C, its content really is harder to understand and to put into practice.
While reading the third-level chapters I could not help pondering the advantages of C++ over plain C. From my point of view, the socket API lends itself rather well to being wrapped in C++ bindings.
That said, the chapters covering the more complex material are written quite precisely; if you follow the book's guidance strictly, you can write fairly reliable code.
Beyond these three levels, Stevens' book also has several auxiliary chapters. One of them describes the Domain Name System in detail; others cover raw sockets, VPNs, and other topics.
I have not read all the auxiliary chapters, but should the need arise, I know where to find the relevant material.
UNIX The second volume of this book (?!) has not seen as many editions; the latest edition, for example, does not include the second volume (?!) at all. Most of the second volume deals with locality: OS-level inter-process communication, such as UNIX-domain sockets. There is plenty of textbook material on this topic; Stevens himself wrote "Advanced Programming in the UNIX Environment", and, should my readers be curious, I can also recommend Marc J. Rochkind's "Advanced UNIX Programming".
There is, however, one topic that I feel UNP covers best of all: SunRPC, Sun's mechanism for remote execution, better known as Remote Procedure Call.
SUNRPC Although SunRPC is used in many places in UNIX, notably in NFS (the Network File System), the official documentation for it is rather thin. Stevens' treatment of it is also fairly short, but it is easy to understand and its logic is laid out very clearly.
Security I think the biggest weakness of TCP/IP is the IPsec protocol suite. Indeed, by now we have accumulated enough experience to understand that IPsec is not the encryption approach best suited to real-world conditions. Stevens spends a chapter describing IPsec's key exchange scheme, even though several more convenient key exchange schemes exist.
Exercises The book comes with a large number of exercises, several per chapter, but I did not do them, because by the time I started reading I already had my own problems and plans.
Most of the knowledge I needed to master could be learnt from UNP (except for encryption).
After finishing the book, I successfully wrote a kernel patch and a networking program, and planned how to improve 3 other (non-urgent) projects.
Overall, Stevens' book, while somewhat dry and somewhat dated, can give readers the foundation they need.
Subscribe and donate if you find anything in this blog and/or other pages useful. Repost, share and discuss, feedback helps me become better.
What can I actually write about Sven Vermeulen's book "SELinux System Administration"?
I cannot write a thorough review, because I have not read the book from cover to cover.
I cannot write a summary, because the book itself is very broad and covers an enormous number of different topics.
I cannot even write a coherent howto on "using SELinux to secure your own server", because, firstly, that depends far too much on how your server is set up, and secondly, the book does not contain enough information for that.
So what is this book about, then?
The last time I had this kind of difficulty writing a book review was about three years ago, when I was trying to review the INCOSE Systems Engineering Handbook.
On the one hand, the book does not seem to contain any nonsense and includes everything needed, and yet it is very hard to say what it is actually about.
In this case, however, the problem seems to lie not in the book as such, but in the material.
SELinux is a complicated thing. Complicated in the sense of intricacy and sheer volume, not in the sense of algorithmic difficulty. It has an enormous number of details and quirks, added by the developers over the many years of its existence. Some of this complexity was added to compensate for shortcomings of the base model, some to synchronise the model with other subsystems of a Linux system, and some is simply poor initial design.
In a sense, this text is not so much a review of the book as my attempt to lay out "the amount of SELinux sufficient to not be baffled that nothing works", rather than a genuine attempt to assess the book's quality as a manual.
Most SELinux manuals begin by telling you that SELinux is a "security module" in the Linux kernel. The word "module" here should be understood not as "an object file loaded into the kernel", but as "code implementing a certain interface, interchangeable with other code". This is true in a sense, because besides SELinux there are other security modules, for example AppArmor.
On the other hand, the kernel code does not implement SELinux completely; support from userspace code is also required. Moreover, besides the obvious interface for configuring SELinux through extensions to the GNU Coreutils, there is also SELinux support in PAM, systemd, dbus, and ipsec.
My goodness! What kind of monster are we dealing with?
And yet that is how it is: SELinux permeates the entire system, because, it seems, no truly comprehensive security system can be built otherwise.
So what is it, after all?
Roughly speaking, it is a system for specifying "who is allowed to do what", in a very detailed and elaborate description language.
That still sounds unclear, doesn't it?
Roughly speaking, the Linux kernel has a certain "choke point" at which "someone", tagged with certain labels, does "something" to "something else", also tagged with labels. The kernel then decides whether to allow or deny the operation.
Sounds terribly abstract?
Let us give an example.
The "who" is, in the overwhelming majority of cases, a process. Here, too, there is a subtlety, because one often wants to distinguish system processes from user processes, even though fundamentally there is no difference between them: programs are just programs. This is exactly where a process label of "user" or "system" might come in.
In the end, the SELinux developers deliberated and deliberated, and ended up adding not one label but three: the "user", the "role", and the "process label" (the type). One should not at all assume that the meanings of these labels are rigidly fixed, that the user really corresponds to a system user, or that the process label corresponds to a system service, although in RedHat that is roughly what is intended.
A typical "object" is a file. It, too, can have three labels, although for files their meaning is even murkier. Often only the third label, the "object label", is used. For files, the labels are stored in the filesystem's "extended attributes"… but not quite.
Unfortunately, this "not quite" permeates the whole SELinux subsystem from top to bottom. In reality, the labels are stored in a special subdirectory of /etc/selinux, and there is a special command, restorecon, whose purpose is to spread those labels out into the extended attributes.
Why do it this way? Well, presumably for speed? And so that a user can be allowed to change file labels without being allowed to write into a directory under /etc? Other objects can be… anything at all! For example, TCP/IP ports, or ipsec packets, or dbus services. In the last case the verification of operations is performed not by the kernel at all, but by a special library.
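As an illustration (mine, not the book's): the same labels the kernel checks can be read from userspace with libselinux, which is roughly what ~id -Z~ and ~ls -Z~ do under the hood. A minimal sketch, compiled with gcc ctx.c -lselinux:

#include <stdio.h>
#include <selinux/selinux.h>   /* libselinux */

int main(void)
{
    char *con = NULL;

    if (is_selinux_enabled() != 1) {
        puts("SELinux is not enabled");
        return 0;
    }
    if (getcon(&con) == 0) {                      /* label of this very process */
        printf("process context: %s\n", con);
        freecon(con);
    }
    if (getfilecon("/etc/passwd", &con) > 0) {    /* label stored in the file's xattrs */
        printf("file context:    %s\n", con);
        freecon(con);
    }
    return 0;
}

The printed contexts are exactly the user:role:type triples discussed above.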
Nonetheless, both the kernel and that special library must somehow know what to allow and when. (In SELinux everything is denied by default, so it is extremely easy to mess up and deny yourself the ability to administer your own system.) For this there is the entire /etc/selinux directory, in which, in extreme detail, with a multitude of minutiae, step by step, file by file, operation by operation, user by user, some "policy author" must describe the correct interplay of the labels.
Naive youth that I was, when I began reading Sven Vermeulen's book I thought that after one introductory chapter I would spend most of my time doing exactly that: writing various "toy policies".
Nothing of the sort. For policy writing the very same Sven has a second book (SELinux Cookbook), devoted almost entirely to writing policies. But even the book I am "supposedly reviewing" points out that there are no fewer than three languages in which these policies are written. It is an extremely laborious process, in which the only bright spot is that one of those languages looks a little like Lisp.
In reality, almost nobody writes these policies themselves, with the rarest exceptions. Most policies arrive in the system in one of two ways: either the RedHat engineers have already written them for you and you simply install the package (and you cannot just inspect a policy by reading its code! policies ship as binary files!), or you use the magic audit2allow command, which generates a policy for you based on the recent denials. That policy will be poor and inefficient, but at least you can make the process you care about work "right now".
Generally speaking, there are the seinfo and sesearch commands, which let you check exactly which rules exist in the currently loaded policies, but, it bears repeating, debugging SELinux policies is a highly non-trivial task.
So, if most system administrators do not write policies, and the book has only a single chapter about policy writing, what is the book actually about?
The book describes a great many details of using SELinux that do not require writing policies directly.
For example, there is such a thing as a SELinux boolean. These are variables (defined in the policies) on which the behaviour of the policies can depend. Why are they needed at all? Why not just write a different policy? Well, apparently precisely because the default policy ships in binary form and is not available for editing by an ordinary administrator.
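For instance (my sketch, not from the book), a boolean can be queried either with the ~getsebool~ utility or programmatically through libselinux; the boolean name below is a well-known example from the stock RedHat policy:

#include <stdio.h>
#include <selinux/selinux.h>

int main(void)
{
    /* An example boolean from the stock RedHat policy. */
    const char *name = "httpd_can_network_connect";

    int val = security_get_boolean_active(name);   /* current value: 1, 0, or -1 on error */
    if (val < 0) { perror(name); return 1; }
    printf("%s --> %s\n", name, val ? "on" : "off");
    return 0;
}

Flipping such a boolean (setsebool httpd_can_network_connect on) changes the behaviour of the loaded policy without recompiling anything.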
The book also explains, for example, how to make sure that a UNIX user gets the correct SELinux user when their processes start, and yes, unfortunately, this has to be done by hand.
It explains how to enable or disable SELinux entirely, or to run it in permissive mode, in which policy violations are only logged, not blocked.
The topic of file contexts is covered quite well, with the matchpathcon, chcon, and runcon commands, which let you check which labels a given object should carry.
The book also briefly covers SELinux's interaction with some Linux subsystems that help set the correct contexts on objects the kernel cannot "reach", such as systemd, dbus, and PAM.
A whole chapter is devoted to SELinux's interaction with Docker and libvirt. Frankly, at first glance this seems superfluous, because Docker and libvirt were invented, in part, precisely so that one need not micro-manage security and could rely on isolation instead; still, it is useful to know that such support exists.
What the book says almost nothing about is how to work with SELinux on Android. Android is the second most obvious use case for SELinux, and by the number of devices perhaps the first. Nevertheless, the book says very little about it, and that knowledge has to be gathered from other sources.
In the end, by my count it turns out that to understand SELinux well one needs to read at least 5 books:
Is that a lot? It is.
And to conclude this review-non-review, I can say that I have not found any way to start using SELinux "on the cheap". That is, as a "toy example" one could probably isolate an nginx installed on a stock RedHat with the stock policy, and thereby earn a checkbox on one's CV, but I personally would not consider that a real achievement.
This post presents my opinion of a small technical book by Michael W Lucas.
I am a fan of Michael W Lucas. Although most of his books are quite simple and cover material that can also be found elsewhere, I believe that mastering computer skills should not be made more painful than necessary. I intend to read all of his books in the systems administration series.
The operating system I have been using since long ago, Slackware, did not include Pluggable Authentication Modules (PAM). But the latest release, 15.0, finally includes it.
Since I believe that a user must be able to control his entire computer, I put learning PAM onto my task list. Besides, I had heard that PAM is the most natural way to organise the encryption of user data. PAM itself does no encryption, but PAM is a kind of hook: the system provides this user-management hook so that certain system-level services, such as decryption, can be activated before the user logs in.
Official documentation is usually both free and covers all of a module's features. Online HOWTOs can usually guide the operator very clearly to some particular result.
Why, then, do we still need additional learning resources? Especially paid ones?
My answer is this: because only a book with (at least a bit of) plot and motivation gives the reader a comprehensive overview. In other words, official documentation has breadth, but lacks the "why do we need this module" motivation. Online HOWTOs have motivation, but no breadth. A technical book has both virtues at once.
One day I heard of a business idea: reading official documentation aloud to students. It was a joke, but it raises a good question: how can people make money from the skill of writing clear, easily understood documentation?
Michael W Lucas is one of the few success stories I have ever seen.
Simply put, PAM is a set of UNIX system plugins. It can perform several operations before a user logs into the system; depending on the results of those operations, the system chooses to allow or deny the login.
The "pluggable" part matters here. It means that the administrator can change or adjust the system's authentication methods after the system has been installed.
But such a module does not have to actually implement authentication. From the authentication standpoint it may always return "allow", while performing some additional work in the background.
Why would one want that? Because this way the system can prepare the user environment according to the administrator's needs (for example, decrypt the user's files).
PAM was not originally a Linux subsystem. It came from Solaris, and the open-source community then implemented two PAM versions following the Solaris model: Linux-PAM and OpenPAM. The two are not fully compatible, but PAM Mastery covers both.
An important difference between the two implementations is that Linux-PAM can do more user environment initialisation, so a Linux-PAM administrator has to plan a PAM deployment more carefully.
The aspects that most need consideration with Linux-PAM are:
On Linux, user environment configuration is normally done in the Bash initialisation script (bashrc). So in order to use bashrc, it has to exist in the first place.
But what if your home directory lives on another computer (say, a NAS), or is encrypted, with bashrc encrypted together with it?
PAM is the answer. The administrator (who, on a personal computer, is the user) can find and install a PAM module, and that PAM module will prepare the user environment.
In a large company the PAM configuration is not under the users' control, whereas bashrc, just like on a personal computer, can be changed by the user. So if the company (the administrator) wants to impose certain restrictions on the users, PAM is the most suitable place to do it.
I rather like the book's logic.
Lucas first presents the history of PAM, and then describes the standard PAM authentication stages: "authentication" (auth), "account" (account), "session" (session), and "password" (password).
He then describes the properties of these four stages and the conventional modules that implement a normal login flow (such as pam_unix.so).
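To make the four stages concrete, here is a minimal sketch of how an application drives them through libpam (my own illustration, not from the book; the service name "login" refers to the stack in /etc/pam.d/login, and the user name is a placeholder). On Linux it links with -lpam -lpam_misc:

#include <security/pam_appl.h>
#include <security/pam_misc.h>   /* misc_conv, Linux-PAM's text-mode conversation */
#include <stdio.h>

static struct pam_conv conv = { misc_conv, NULL };

int main(void)
{
    pam_handle_t *pamh = NULL;
    int rc = pam_start("login", "alice", &conv, &pamh);    /* "alice" is a placeholder */

    if (rc == PAM_SUCCESS) rc = pam_authenticate(pamh, 0); /* the auth stage */
    if (rc == PAM_SUCCESS) rc = pam_acct_mgmt(pamh, 0);    /* the account stage */
    if (rc == PAM_SUCCESS) rc = pam_open_session(pamh, 0); /* the session stage */

    printf("result: %s\n", pam_strerror(pamh, rc));
    if (rc == PAM_SUCCESS) pam_close_session(pamh, 0);
    pam_end(pamh, rc);
    return rc == PAM_SUCCESS ? 0 : 1;
}

The password stage (pam_chauthtok()) is invoked separately, when a password actually needs changing; which modules run at each stage is decided entirely by the configuration, not by the application.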
He then spends two chapters on the details of Linux-PAM and OpenPAM. What matters most here is Linux-PAM's error handling, which is much more fine-grained than OpenPAM's, and therefore allows very precise customisation of the authentication logic.
He devotes a chapter to PAM error codes, since error analysis is a very important part of debugging a login flow. Debugging really deserves a chapter of its own; Lucas covers pam_printf, pam_exec, and other modules that assist with debugging.
In the last part of the book he describes how PAM works together with SSH. Every administrator logs in remotely over SSH, but logging in and unlocking the SSH keyring require two separate passwords; PAM lets you log in with your SSH keyring password.
TODO
I feel the book's biggest flaw is that it completely ignores Linux namespaces, even though PAM is one of the most natural places to initialise a namespace.
For example, a user may want to restrict certain processes' permission to browse the filesystem.
The ordinary Linux permissions framework can deny access, but it cannot restrict browsing.
But if the process-managing service switches identity through PAM, becoming a process with a different filesystem namespace, that process gets a deliberately restricted view of the filesystem.
Lucas's language flows well; no wonder that besides technical books he also writes detective novels. He uses funny examples, such as the (fictional) PAM module pam_breathalyzer, which refuses logins to drunk users.
The difficulty of this book, I would say, is very low. The topic is subtle, but not hard.
Most of the books Lucas writes cover rather small, rather simple topics. But if you stacked them all up, the pile would still reach tens of centimetres in thickness and thousands of pages in total.
Computers grow more complex every year. Not harder, one might say, since most systems administration logic is simple, but many systems running and interacting at the same time invariably generate complexity.
A reasonably diligent student will say: oh, this bit is simple, that bit is simple, and that other bit is even simpler; but put it all together, and you suddenly find that a whole year is not enough to learn everything.
I can say that if you really want to be the master of your own computer, there is no way around understanding PAM; and also that for understanding PAM this handbook is sufficient. So I can recommend it. I cannot say this book will greatly expand your knowledge, but it will extend the part of your external brain that holds operational logic.
Subscribe and donate if you find anything in this blog and/or other pages useful. Repost, share and discuss, feedback helps me become better.
I wrote this file post-factum, after already publishing the review. Therefore, the remarks section is empty.
See the review file: 2020-07-14_Review-on-The-Light-that-Failed_Ivan-Krastev-and-Stephen-Holmes.html
I (lockywolf) copied these words from the main planner file.
0 | Word | Translation |
---|---|---|
1 | feckless | тщетный |
2 | undercut | сбить цену |
3 | tellingly | красноречиво |
4 | once-fêted | когда-то чествованный |
5 | lustre | глянец, блеск |
6 | topple | опрокинуть |
7 | bulwark | насыпь, бастион |
8 | loosed | made loose, released |
9 | genuflect | преклонять колени |
10 | trite | банальный |
11 | firmament | небосвод |
12 | stand-off | ничья, пат |
13 | fraught | преисполненный |
14 | veering | контролируемо ослаблять |
15 | fomented | разжигать, подстрекать |
16 | disparaged | преуменьшать |
17 | malaise | недомогание |
18 | perfunctory | небрежный |
19 | mimetic | подражательный |
20 | glib | бойкий |
21 | contentious | сварливый |
22 | prescient | предвидящий |
23 | swathe | обмотанный слоями |
24 | feign | симулировать |
25 | rankling | нагноение, (фиг. обида) |
26 | imperious | властный, повелительный |
27 | erstwhile | былой |
28 | irksome | надоедливый |
29 | ricketiness | шаткость, покосившееся состояние |
30 | supercilious | высокомерный |
31 | condescendingly | снисходительно |
32 | travail | тяжкий труд |
33 | apprehension | опасение |
34 | mingle | смешение |
35 | roil | муть |
36 | crassly | несуразный |
37 | seethe | бурлить |
38 | indelibly | неизгладимо |
39 | prescient | предвидящий |
40 | capsize | опрокинуть |
41 | taproot | стержневой корень |
42 | scurrilous | непристойный |
43 | offhand | экспромтом |
44 | implacable | неумолимый |
45 | root-and-branch | целиком и полностью |
46 | pungently | пикантно |
47 | wily | коварно |
48 | hoodwink | обмануть, провести |
49 | inapposite | неуместный |
50 | excoriation | разнос, жёсткая критика |
51 | condescendingly | снисходительно |
52 | presciently | прозорливо |
53 | grist | помол зерна/солода |
54 | fomented | разожжённый |
55 | throe | агония |
56 | bait-and-switch | обман путём предложения товара по неверной цене |
57 | foisted | навязанный |
58 | onerous | обременительны |
59 | gratuitously | беспричинно |
60 | repudiation | отказ, отречение |
61 | heinous | гнусный |
62 | denigrate | порочить |
63 | humdrum | банальный |
64 | incongruous | нелепый |
65 | tatty | безвкусный |
66 | grisly | скверный |
67 | underhandedness | скрытое нечестное поведение |
68 | congruous | гармоничный |
69 | lofty | возвышенный |
70 | afflict | тревожить, заражать, влиять в плохом смысле |
71 | gnawing | подтачивать |
72 | fulminate | гремучая (смесь) |
73 | wrought | кованый |
74 | candour | откровенность |
75 | gabfest | торжество словоблудия |
76 | rattle | греметь |
77 | heinous | гнусный |
78 | fluster | слегка возбудиться |
79 | uncouth | неотёсанный |
80 | patrimony | вотчина |
81 | scoffing | насмешливый |
82 | dishearten | привести в уныние |
83 | aсquiesce | неохотно согласиться |
84 | muzzled | в наморднике |
85 | condescension | снисходительность |
86 | indelible | неизгладимый |
87 | preposterous | нелепый |
88 | pent-up | сдерживаемый |
89 | galling | раздражение (типа кожи, также фиг.) |
90 | pillory | выставить на осмеяние |
91 | extirpation | искоренение |
92 | volte-face | резкая перемена |
93 | segue | переход |
94 | bristling | ощетинившийся |
95 | complacent | самодовольный |
96 | scathing | уничтожающий |
97 | pungently | пикантно |
98 | truculent | свирепый |
99 | conceit | тщеславный |
100 | gnawing | грызущий |
101 | shabbily | затрапезно |
102 | preponderance | преобладание |
103 | vainglorious | тщеславный |
104 | inexorably | неумолимо |
105 | disavowing | отрекаясь |
106 | pent-up | сдерживаемый |
107 | stave off | предотвратить |
108 | papered | обёрнутый |
109 | demeaning | унизительный |
110 | makeshift | импровизированный |
111 | fending off | парировать |
112 | sneering | насмешливый |
113 | ingratiating | льстивый |
114 | bequeath | завещать |
115 | spinmeister | пропагандист |
116 | inchoate | незавершённый |
117 | heyday | рассвет |
118 | mendacity | лживость |
119 | subliminal | подсознательный |
120 | gimmicks and ruses | “трюки и трюки?” |
121 | hobbling | хромать |
122 | bungling | головотяпство |
123 | faltering | прерывистый |
124 | thwarter | пресечь |
125 | dizzying | головокружительный |
126 | denouement | развязка |
127 | paean | победная песня |
128 | cosying | успокаивание |
129 | strut | подпорка |
130 | flout | попирать |
131 | moonstruck | помешанный |
132 | scornful | презрительный |
133 | vaunt | превозносить |
134 | impelled | побуждать |
135 | pollen | пыльца |
136 | stammering | заикание |
137 | abet | соучастие |
138 | impute | вменить, приписать |
139 | ratchet up | усиливать |
141 | chicanery | кляузничество |
142 | cribbing | списывание (на уроках) |
143 | gauche | неловкий |
144 | boorish | невоспитанный |
145 | jeering | глумление |
146 | clutter | беспорядок, суматоха |
147 | gaudy | безвкусный |
148 | basking | (гигантский?) |
149 | denigrating | клеветнический |
150 | fluke | счастливая случайность, неудача |
151 | meandering | извилистый |
152 | replete | переполненный |
153 | self-effacing | скромный |
154 | repudiation | отрицание |
155 | airy | мечтательный, ветреный |
156 | conceit | тщеславие |
157 | candour | откровенность |
158 | jingoistic | экстремально патриотичный |
159 | strewn | усыпанный |
160 | clout | лоскут |
161 | engender | породить |
162 | hamstrung | с подрезанными крыльями |
163 | interloper | тот, кто вмешивается в дела |
164 | fortuitously | случайно |
165 | stupendous | колоссальной важности |
166 | misbegotten | рождённый по ошибке |
167 | to irk | раздражать |
168 | to contend | соперничать |
169 | subliminal | подсознательный |
170 | underhanded | коварный |
171 | fray | износ |
172 | insouciance | беззаботность |
173 | bamboozle | надувать |
174 | connivance | попустительство |
175 | enraptured | приводить в восторг |
176 | foment | разжигать |
177 | profligate | расточительный |
178 | posterity | последующие поколения |
179 | unfazed | невозмутимый |
180 | precipitous | обрывистый |
181 | curry favour | добиться одобрения лестью |
182 | working stiff | те, кто работают |
183 | roiling | бурливый, мутный |
184 | sapper | сапёр |
185 | consanguinity | единокровность |
186 | coddling | нянчить |
187 | prevaricate | увиливать |
188 | untoward | неблагоприятный |
189 | dish out | выгибаться |
190 | brashly | порывисто |
191 | duped | обманутый |
192 | candid | беспристрастный |
193 | rowdy | дебошир |
194 | epitomise | резюмировать |
195 | mendacity | лживость |
196 | loosed | “подрасслабить” |
197 | take up | принятся за |
198 | adjudication | приговор |
199 | sidle | ходить бочком |
200 | titillate | щекотать |
201 | excoriate | сделать ссадину |
202 | momentous | важный |
203 | flagrant | вопиющий |
204 | foment | поджигать |
205 | go to the brink | дойти до грани |
206 | wager | ставка |
207 | hindsight | “глядя в прошлое” |
208 | repudiated | аннулировать, отречься |
209 | pernicious | пагубный |
210 | unflinching | неустрашимый |
211 | flipside | оборотная сторона |
212 | hoist | подъёмник |
213 | eschew | избегать, сторониться |
214 | pent-up | сдерживать |
215 | caller of the shots | тот, кто отвечает |
216 | assay | анализ |
217 | feign | симулировать |
218 | copious | обильный |
219 | fray | изношенный |
220 | coaxing | уговаривание |
221 | daub | штукатурка |
222 | cheek-by-jowl | плечом к плечу |
223 | jostling | тесниться |
Makes sense, because this word essentially delineates what is allowed and what is not.
Narcissistic defenses are those processes whereby the idealized aspects of the self are preserved, and its limitations denied. They tend to be rigid and totalistic. (Wikipedia)
Wrong, right? Academics may have that all the time, but hey, most people do not imagine self-sufficiency.
Very important sentence! C.f. Zhu JinNing (Chin-Ning Chu) for "Art of War for Women". Women's strength is in their weakness.
Philosophically or practically? I mean, yes, sort of, but there is constant confusion of ideal with real.
She actually understands this! How on Earth is she expecting to both dismantle a hierarchy (a goal noble in several respects) and keep insisting on the inevitability of interdependence?
If modern feminists (especially Russian ones) were to try reading what she's writing, they'd anathematise her.
Because people cannot be resurrected, whereas social constructs can, to a huge degree! People are not generally substitutable, but social constructs are! Yes, you can re-marry, but you cannot choose a different mother.
This book is very messy. It uses very obscure language, and it is missing supporting references for several critical statements. (It does provide references for many other statements.) Even the number of words I had to write down into my "learn later" list was much smaller than similar texts usually give me, as the confusion came mainly from the misuse of simple words rather than from the use of many complicated ones.
As the book is badly written, and extracting the meaning from the text is very hard, this review will have to take the form of "what thoughts the book made me think" rather than "what the book is about".
So, the whole book is based on the concept of the "imaginary". It is more about a Utopian fantasy than about anything existing. Butler herself defends her right to proceed this way with the weakest supporting argument of all time: "would you like to live in a world in which no-one thinks about such a development perspective?" Indeed, this can be rephrased as "what, do you care too much about what I write?". It is basically an appeal to Freedom of Speech, that last resort.
I do grant her that right, but then I would rather use this text not as a description of anything working, but as an exercise in reading and extracting whatever meaningful thoughts there may be in a deliberately obscured text. An exercise in philosophising rather than an exercise in philosophy.
The author seems determined that the human world in general can be described using essentially two main tools: politics and psychology. Not digging very deeply, she seems to use Freud and his theory as the source of psychological theory, and Hobbes as the source of political theory. While both are very respectable founders of their fields, it staggered me that she seems to regard them so highly even while being aware (?) of the modern state of research. The idea here, perhaps, is that politics describes human behaviour "en masse", whereas psychology describes interpersonal (she uses the word "dyadic", which is strange enough already) interactions. It's obvious, but someone has to say it: human societies are so much more complex than just one-to-one and one-to-many interactions!
Butler seems to love Freud. It is a little surprising, given that Freud is not, probably, the most advanced source of psychological knowledge nowadays. Indeed, he was extremely instrumental in founding the field, but that was so long ago, and much more substantive research has been done since.
She also seems to focus mostly on Hobbes when looking at political theory. She acknowledges the existence of Locke and Rousseau, but very superficially, and mostly with respect to the "state of nature". (Rousseau is mentioned half as often as Hobbes, and Locke's theory is ignored completely. Furthermore, why do we even need the "state of nature" in this discussion?) And again, why are we focusing on the founders so much more than on those who attempted to improve the theory?
She tries to "imagine" a world that is non-violent (a very confusing and convoluted term that she spends a lot of time describing). This world, she argues, has to be based on the "ethics of interdependence". The value of life, then, is based on its "grievability" by other people.
The argument she is trying to build jumps from the imaginary "freedom as independence" straight to a "freedom of total inter-dependence", as if there were no middle ground.
She does not completely ignore the existence of groups, but she entirely ignores the very concept of group dependence. Indeed, people cannot be fully independent, but total dependence is also not how things work. Freedom (a word she does not use a lot) means that people choose whom to depend upon.
The groups that appear in her text are mostly groups of similarity, and most often groups of blood relatives. She also speaks about groups of grievability and groups of power, but almost entirely ignores groups of friendship and cooperation. Yet those are precisely the groups that are actually worth living for.
The concept of "living" plays a large role in her argument, and she writes a lot of words to try and describe what it is to be alive, and how we regret the loss of life. She even proposes to value lives according to how much we would regret the loss of those lives…
But why would we even do that? It is very human and emotional to grieve after a loss. But that very notion has long been proven to be one of the least useful in the world. As the English proverb says, "there's no sense crying over spilt milk."
The value of the life that is already lost is then known precisely, and equal to zero.
The concept of imagination is used a lot in her treatise, and this is indeed where the book has proven useful. The results of her imagination I consider worthless, but the way she self-reflects on imagining the world, and also tries to model the way other people imagine the world, made me think a lot.
I have never really thought about the "imagination machinery" in the human brain. And I really like the concept of phantasy, distinct from fantasy by the presence of a subconscious component. I really liked thinking about the different kinds of imagination: imagining scenes, imagining words, both written and spoken, imagining 2D objects, imagining feelings, and much more.
I think that the "imagination software" in the brain is really worth exploring.
The part of the treatise that is dedicated to saving and killing largely revolves around the desire to destroy and the desire to save, as given by Freud. I cannot say I can distil any meaningful conclusion from her words. Moreover, the whole discussion seems very contrived, produced only in the name of deriving certain political slogans of the day. That is, it looks (to me) largely like fitting the argument to an answer the author already believes to be true. The very structures dedicated to preserving life she considers a manifestation of a "dominance hierarchy" worth bringing down. Indeed, such structures often become corrupt, but she produces no decent substitute concept, besides doing some mental gymnastics modelled after Kant and Freud.
She does give a great account of the police in the USA killing people, especially black people. The language of those parts of the book is much more lucid and paints a much more vivid image. This makes me think that as a political professional (e.g. a political campaign mastermind) she could have had a role that would fit her much better than that of a professional philosopher.
This section will just list a few thoughts that I don't think actually fit into any reasonable piece of argument, but are worth scavenging from the book.
Obviously, as mentioned previously, she draws on Freud and Hobbes. Among Freud's successors, she speaks about Melanie Klein. She cites almost all the famous Marxists of the 20th century, starting from Althusser. Laplanche and Derrida, obviously, creep into the narrative too, how could they not. Foucault and Fanon even get their own chapter.
Melanie Klein seems to be the only psychologist that she seriously considers, besides Freud. Perhaps, worth looking into.
The most disappointing part of the book is that the actual analysis of non-violent action is given as little attention as can possibly be given while the book still bears some relevance to non-violent protest. Apart from "using human bodies as a wall" and "coming to the shores of Europe in boats full of people", not much is said about the ethicality (or the absence thereof) of different kinds of peaceful protest. Strategy, tactics, effectiveness – all of that is mostly ignored, apart from the insistence that structural violence itself will always try to present peaceful action as violent. As if we didn't know that. A lot is said about self-defence, but very little about extralegal defence of others. In particular, the subject of defending humans against the dangerous forces of nature is completely ignored.
When you approach someone well known as an adversary of the forces you generally sympathise with, there may be several expectations. You may expect the text to be outrageous demagoguery, aimed at appealing to emotions and ignoring any traces of the rational; this at least gives you the guilty pleasure of imagining a wild combat of ideas. You may expect a cleverly twisted argument, crafted so well that you find it very hard to penetrate the logic and find the flaws; then it is upon you to sharpen your mind for a proper duel with your opponent. Or you can expect to be mistaken, and to be exposed to ideas that have not yet had a place in your mind, and that would be the best possible option. Getting enlightened, after all, is one of the best feelings in the world.
What you probably do not expect, although you should, is for the book to simply lack any sort of cohesive picture. Is this one of the aspects of the "banality of evil" that Hannah Arendt was writing about? "The Force of Non-Violence" is from this last category. Of course, I am not equating Butler with any of the horrible evil-doers of the world.
But the book still leaves me with that creeping in feeling: "How can it be that someone who manages to take simple things and express them in a totally incomprehensible way happens to be one of the most prominent philosophers of our time?"
It leaves you with a feeling that you have missed something. Is that "something" ultimately incomprehensible to just that kind of people that you belong to?
But no, over and over I keep seeing in this book only an exercise in philosophising and nothing else. Chunks of not very consistent reasoning interspersed with literature reviews of various philosophers and journalists. Attempts to make a well-structured text that keep failing over and over.
On the other hand, it has at least made me produce my longest book review so far. For that, at least, I should be grateful.
I have read “Spin Dictators” by Sergei Guriev and Daniel Treisman. My review is below.
The book took me 7 hours 22 minutes to read, that is, 442 minutes. With about 220 pages of readable text (the whole book is almost twice as large, but the rest is mostly references), this makes about 2 minutes per page. Not exactly a book for slow reading.
Why is that?
When reading “Spin Dictators”, I couldn’t get rid of a feeling that I had already heard most of the propositions made. Where? In the Russian opposition-leaning media, for the most part, as well as the Western media, mostly left-leaning.
This made me… critical of the text. I guess I have to give this disclaimer, because to an extent it means that I cannot review the book in an unbiased way: not because I am predisposed against the book, but because I have simply had too much exposure to a partisan political agenda.
Does it mean that things said there are a priori false? Not at all, after all, political agendas are sometimes built on genuine understanding, and in the case of “Spin Dictators”, most claims are supported by evidence, even though I haven’t bothered to verify that evidence. However, it did make me approach the text from a critical viewpoint.
So what the authors are saying can be roughly summarised as follows: since the last quarter of the twentieth century, dictatorships have been based much more on manipulating and misleading people than on instilling fear in them.
The first part of the book defines more precisely what a "Spin Dictatorship" is, and goes on to describe its properties, such as its paradigmatic policies towards democracy, international relations, propaganda, repression, and censorship.
The second part of the book tries to establish how those "Spin Dictatorships" appeared, how they might evolve, and how democratic states should deal with them.
Overall, this book left me with a feeling of unease. I cannot specify exactly where and why. Those interested may have a look at the notes in the next section of this file.
Perhaps, the most disturbing thought for me is the authors’ firm belief in “international institutions”. After all, international institutions are just institutions, prone to all problems of bureaucratic organisations.
One more thing that bothers me is a really slack attitude towards sovereignty. Naturally, some countries are richer than others. But the approach of "do what we tell you, and only then will we help you" sounds too fragile to actually work as intended.
Also, they mention that countries nowadays have about 43% of their economies tied to imports and exports. This sounds way beyond reasonable. I mean, I like Japanese knives, but do I want there to be no domestic knives in a shop nearby? I doubt it.
Similarly, I find it hard to believe that "progress" can be achieved by having the more progressive instil it into people. Something just doesn't sound right here. Without independence, how can there be adulthood?
It is not normal for people to rebel. People have an “emotional barrier” before they allow that violence to rise up. (?)
Is not that democracy?
TODO: thought! Maybe the content of the propaganda does not matter whatsoever? Maybe the mere presence is enough? Make people always have you onto their mind?
REALLY???
FFS. I didn’t know that.
Very interesting. (TODO?) I doubt the values.
Hm… ? I need to read Foucault.
Easy to process, but hard to keep the brain focused.
Indeed! Making people confused is more efficient than making them scared of something definite.
Regulating something is a surer way to kill something than outright prohibiting it.
!!!
Very important. Corrosion should be subtle.
More or less lists the things that are used.
Well, dictators use both mobilising and de-mobilising propaganda.
“American cinema and thrillers.” (sic!)
Important!
Can a “spin dictatorship” really exist in a closed system?
Make it more confusing!
Very important!
Very important!
Note: read Eisenhower’s speeches for colourful epithets.
Alberto Fujimori
Why is any of that important?
Fear dictatorships censor openly, Spin dictatorships censor covertly.
Hugo Chávez
Dictators use fraud to increase their outcome from 55% to 65%.
What is this chapter about? International relations? Emigration?
Wow.
Spin dictatorships usually do not fight wars. Except Putin's, and even he tried to keep them short.
Can pro-democratic opposition do the same?
What is modernisation and why does it happen?
“Coalitions of states form to promote the respect for human rights”? Seriously? Are you kidding me? This is the single point that I don’t find plausible as an argument in any way.
Why exactly do they pressure for genuine democracy?
Seemingly, the pressure from the outside should be the pressure by threat. But here there seems to be no threat.
Where are the stimuli? Or the energy balance.
The “woke left academia” is, seemingly, proving otherwise.
To make VPNs :).
What about the balkanisation of the Internet? What about COVID?
Assuming this is correct, I would say that 43% is insanely high. Nobody would be happy about such a proportion.
How about distinguishing between “human rights” and “citizen rights”? I think it is conflated a lot in this chapter.
Dictators? Really? Why do Guriev and Treisman mention displaced dictators I have never heard of? The Mexican one and the Ivorian one.
Which makes them poorer and less likely to rebel?
What about foreign military aid from not USA? Say, Iran?
Very interesting. Are these guys permanently in debt, or somehow manage to get out?
Do as we say, and we will give you the money?
Still, what about going the North Korea way?
What about closing the damn universities?
(keep in mind)
Because that is distracting from domestic affairs?
Has this changed after Covid?
Are they even independent countries in reality?
Really? Why? One body can be dealt with. Many competing bodies cannot.
Okay, so the “West” should use the “idea of democracy” to educate the world.
This book review is going to be really short, well, partly because the book itself is short. However, short here does not imply lack of value. I have discovered that the books by Michael W Lucas always seem to play a role that is at the same time very niche, and very valuable. So I have taken writing this review as an opportunity to also reflect on this niche.
What is "Networking for Systems Administrators"? Well, basically it is an introduction into computer networks for people who are otherwise totally unfamiliar with either computers or networks.
Don't be misled by the words "Systems Administrators" or even the fact that the book is from the "Mastery" series. This is more of an irony than a real claim for domination.
Books from the "Mastery" series are mostly very simple and targeted at the most entry-level audience. In fact, they contain even less information than the official manual pages or the guides you can find on Google by the hundred within an hour.
Does it discount their value? Not in the slightest. Their biggest advantage is that they, as opposed to most of the cheap quick howtos with clickbaity titles, are actually correct.
Also, those books generally have a nice style bearing resemblance to fiction books, possibly the detective stories that Lucas has been writing as a different branch of his literary career.
So who is this book for? There are two answers to this question. The first is power users, a term invented by Microsoft to describe the people most likely to screw up their carefully crafted heuristics. The second has a slight flavour of national colour and is hard to translate into English, but let me go with the occasionalism "anykeyster": the IT department employee who helps people find the "Any" key on their keyboard.
So, after the laughter in the audience of my enlightened friends has abated, let me try to convince you that this is, really, the most sensitive, the most vulnerable, and the most in need of community support group of computer users.
Why is that? Because they are already curious enough to start digging into the subject matter, but not yet experienced enough to distinguish truth from obvious lies. They are skilful enough to break things, but not skilful enough to repair them.
So good introductory books on, basically, any subject are of critical importance to any healthy technical field, and especially to automated thinking.
Let me start with the topics the book does not cover. The book says almost nothing about a tool which most of us have probably toyed with while pretending to be savvy hackers: Nmap. The book does mention it, so curious minds might very well get enticed and follow the link, but I think it could have been covered more extensively.
So now let me try and list the things that are touched upon in the book with noticeable attention:
Ah, okay, one more thing that is essentially missing from the book is firewalling and packet rewriting. The book does have a whole chapter dedicated to it, but that chapter comes nowhere near giving enough practical knowledge to actually apply it to your own machine.
The remaining seven topics can be called the staple of networking, and Lucas faithfully covers the basics of each one, which, I would hope, should leave the reader less misguided by poor googleable howtos and ready to ask Google further questions.
The depth of coverage for each topic is really not that great, but the most basic examples of command-line usage for each set of commands corresponding to each layer of networking are properly given.
Myself, even though I have spent significant effort debugging networks of Linux, BSD, and Windows systems, I have found certain command incantations that I haven't used as much as I should have, for example, interface error statistics ~ip -s link~, and the one that abuses ~netcat~ to forward shell access (find it in the book).
The style, as mentioned before, is light, with jokes and funny but plausible examples from real life, which makes reading much less boring than your average thick book on networking or programming.
Indeed, the book is not thick at all. It took me three hours to read, although I have to admit that a lot of content evaded my eyes because I already knew it and was glossing over. I suspect that for a newbie it might take about three times longer, which is still barely more than a single working day.
Shall I recommend this book to an experienced network administrator or programmer? No, not really. But even for them it could be a useful tool, to be handed out to the biggest troublemakers on the network in order to develop a common language.
But I shall definitely recommend this book to everyone who is either just starting with computer networking, or has been misled by the companies and governments promising "ease and convenience".
Subscribe and donate if you find anything in this blog and/or other pages useful. Repost, share and discuss, feedback helps me become better.
In this document I want to review a book by Chris Sanders, "Practical Packet Analysis", published by No Starch Press. It took me about a week to read the whole book, and even more time to revise what I had learnt and to write a review.
So, what made me want to write about a book on packet analysis, and what is packet analysis in the first place? Well, speaking non-strictly, packets are the words of the Internet. Small-ish digestible pieces of information that computers exchange among themselves. Digestible here is a very good metaphor.
Suppose you are having a dinner, and together with your main food, you also have a glass of soda. Well, soda water is liquid, and can be consumed, in theory, continuously. However, it is hard to drink something continuously for a long time. People tend to swallow even liquid products gulp by gulp – that is, in portions.
Computers in this respect are surprisingly not different to humans. Even when communication appears to be continuous, (such as with the TCP/IP protocol), in reality it goes in small portions called “packets”.
Why would we need to analyse those packets at all? When communication goes well and no problems are perceived, packet analysis is only needed by networking professionals, for their professional purposes. However, when things go south, even people who are not experts, but simply want to debug their Internet connection, might need to try and see what is actually going in and out of their network card – packets.
Another case when packet analysis may be of value is when one is writing a distributed application (which I have been doing at my job), and needs to check whether network load is distributed evenly, or at least rationally over the network.
I ended up reading Chris Sanders' book after skimming through a few similar ones in a library, and I have to say, there were quite a few alternatives. In fact, taught by previous bitter experience, I had suspected that I would have to read more than one book on the topic. Jumping ahead: that is indeed what happened, but Practical Packet Analysis (PPA) turned out to be good enough that I did not have to read those additional books in their entirety; I only needed to consume a few chapters to fill in the lacunae.
One more heuristic I used when choosing PPA is that it is published by No Starch Press. I do not know how they do it, but the No Starch people always seem to be able to publish really good handbooks on technologies, sometimes even very exotic ones. (The only recent book on GNU Autotools is published by them!)
So, Practical Packet Analysis turned out to be a handy, simple, and, indeed, practical book. It does very little to explain TCP/IP itself, offloading that duty to other books, for example, to "TCP/IP Illustrated" and "Internetworking with TCP/IP".
What it does do, however, is cover the practical cases that a systems administrator debugging his system might be interested in: slow network, address conflicts, packet loss, name resolution failures, traffic hijacking by adversaries.
It does so by gradually, bit by bit, unfolding the GUI of the Wireshark packet capture tool. As Wireshark's interface is quite naturally adapted to debugging certain kinds of network failures, going over it, screen after screen, allows the author to explain Wireshark's usage and, at the same time, comment on which issues led to the appearance of those screens, and how to resolve them.
For the cases when Wireshark does not have a suitable GUI page, it is possible to do simple packet capture, and to hand over the analysis of those packet dumps to some other tool.
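That capture-now-analyse-elsewhere workflow is easy to reproduce programmatically with libpcap, the library underneath tcpdump and Wireshark. A minimal sketch (mine, not the book's; the interface name is a placeholder, and the program needs capture privileges), linked with -lpcap:

#include <pcap/pcap.h>
#include <stdio.h>

int main(void)
{
    char errbuf[PCAP_ERRBUF_SIZE];

    /* "eth0" is a placeholder; enumerate devices with pcap_findalldevs(). */
    pcap_t *ph = pcap_open_live("eth0", 65535, 1, 1000, errbuf);
    if (!ph) { fprintf(stderr, "pcap_open_live: %s\n", errbuf); return 1; }

    /* The resulting file is readable by Wireshark and tshark. */
    pcap_dumper_t *dump = pcap_dump_open(ph, "capture.pcap");
    if (!dump) { fprintf(stderr, "%s\n", pcap_geterr(ph)); return 1; }

    /* Capture 100 packets, writing each one to the dump file. */
    pcap_loop(ph, 100, pcap_dump, (u_char *)dump);

    pcap_dump_close(dump);
    pcap_close(ph);
    return 0;
}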
Unfortunately, PPA does not say much about analysing packets with a script. It does mention, though, that Wireshark has a plugin framework and supports applying packet-analysing scripts (which can be written in different languages) to the packet streams being captured.
It also covers console usage of tshark (the terminal Wireshark). I even ended up using it for some of my tasks, because some machines do not have a GUI. In addition to tshark, the same chapter also discusses tcpdump, the only other tool in the book.
I was a bit disappointed that the book says almost nothing about reverse-engineering protocols. Its basic assumption is that whatever is going on within a network either has already been defined in the Wireshark library, or has a description, say, in ASN.1, which can be converted into a Wireshark packet dissector with a little work.
However, the library that does network communication in my case, and which I need to debug, is not transparent: its protocol is not known, so there is no ready-to-use packet dissector. The book, however, does refer to Wireshark's official manual, which, seemingly, is quite decent and even exists in two volumes, one for the GUI and one for writing dissectors, so the direction to head in after the end of the book becomes clear.
It also does not cover Wireshark's GUI entirely; some features were left as an exercise to the reader. I do hope they will be easy enough to grasp on one's own.
The book is not hard, allows solving certain frequently seen problems at first reading, and gives decent references on where to continue the study of network programming.
I am a little disappointed that it does not mention mausezahn, the tool used by Linux kernel developers for network testing, and that it does not teach how to write a packet dissector, for example, for some very simple protocol.
I would recommend it as homework for beginner network programmers and network administrators.
Subscribe and donate if you find anything in this blog and/or other pages useful. Repost, share and discuss, feedback helps me become better.
Some programs become mundane five minutes after they are installed. For example, ICQ, or any other messaging system. They look like they have always been there even if they are brand new.
Others may have been out for ages, but still leave you with a feeling of awe and fun. This document lists a subset of the second kind.
I have read “The Prince” by Niccolo Machiavelli. Feel free to read my review.
Books Books are, undoubtedly, very different. Some books are hard to read, some are easy. Some books are fun, some leave you with a feeling of sorrow. Some require preparation to be read, some can be read at once.
Context “The Prince” by Machiavelli is certainly from that kind which requires preparation. I should say, this was one of the most Google-intensive books I have read. He somehow expects his readers to be familiar with his contemporary history, as well as ancient history almost from the beginning of time.
Google Auspiciously, we have Google at our disposal, so we can compensate for the typical lack of context that a modern reader would have with respect to a book some 500 years old.
Translation The book was translated by Ninian Hill Thomson, about whom I have not found an abundance of information, but who was apparently an Oxford University graduate and a scholar of British India law, living in the XIX century.
Language The translation language is a bit archaic, I believe, even for those times; this embellished the narrative quite substantially. It was a bit challenging for me at first, but I dealt with it in my usual manner: having a vocabulary prepared made that read just one more exercise.
Narrative As opposed to the language, the narrative looks surprisingly fresh. I haven’t managed to resist writing out a lot of parallels with contemporary life and politics, of both the plots unwrapping in the times of Machiavelli, and the plots from long B.C.
Topics Indeed, it is hard to avoid covertly smiling when you see politicians in the XXI century making exactly the same mistakes as the ones from the XVI. It is also interesting to notice how many modern controversial discussion points Machiavelli is already aware of, 500 years ago. In particular, he touches the ideas of a “nation state”, “armed citizenry”, the “sources of political legitimacy” (although he, obviously, does not call them the way Max Weber does), “republics and autoritiarianisms”, “government by consent”, and even “importance of being ethical” (which he completely refutes).
References So, the book is discussing things that have not aged pretty much at all for the past 500 years, and if not for the necessity to Google a lot, would have been an easy (although not simple) read. Unfortunately, the tradition of making reference lists have not yet been rooted deeply by his time, so unequivocally identifying, who exactly it mentioning in his examples requires a little work, but the good thing is that, because The Prince is so well-known, there is plenty of commentary on the Internet that polyfill the lacunae.
Puzzle The obvious benefit of having so many references to historical events is that this book foments that web of knowledge that an erudite is expected to have, covering European history. We all kind of know what is Roman Empire, who is Alexander, and what was the difference between Guelfs and Ghibellines, but having been nudged into re-googling a lot about them helped me to put those disperse pieces of a jigsaw puzzle into a cohesive picture.
Depth
With respect to the actual advice given to aspiring princes, Machiavelli does a nice job of systematising many aspects of a prince’s life, but I cannot say that his study is sufficiently deep. Naturally, when a researcher is one of the first in his field, it is acceptable for him to paint the picture in broad strokes, and many other researchers of the nature of power have continued along his track. Still, I would like to read much more about a prince’s interaction with his ministers and advisers; a single short chapter is just not enough.
Ethics
One of the important points in his reasoning is the difference between “being” and “seeming”; both should serve the purposes of the prince. This is what “machiavellianism” owes its name to, and hence the biggest controversy of this book: if it is hardly possible to maintain authority while always telling the truth, is every authority inherently immoral? This is also the place where we see most prominently the attitude that the effectiveness of things should be judged by experiment.
Republics
Machiavelli also speaks about republics, and I even found out that he wrote a prequel to The Prince, where he discusses republican power. He certainly understands the benefits of giving power to the people, such as stability, the faithfulness of the army, and so on, although, I believe, a lot of his reasoning only applies to small city-states, where social cohesion is much higher than in big countries such as Russia. On the other hand, he seems to be positively convinced that only princedoms are really capable of big change, and this is, arguably, the main raison d’être of the book: to bring about an all-Italy reform.
Summary
In general, I believe that The Prince is a very good book, which every aspiring tyrant should read. (I remember a rumour that Stalin had The Prince in his library.) Even a non-prince trying to make a career in an office environment could benefit from it, and history lovers would relish its rich historical context. An excellent, enjoyable, and useful book, if you are not afraid to digress and do some googling.
This document is a notes file for reading Niccolò Machiavelli’s “The Prince”.
Word | Translation |
---|---|
prudence | wisdom in the way of caution and provision |
weir (and mole) | a low dam across a stream |
(weir and) mole | a massive breakwater or pier |
vicissitudes | regular change from one to another |
propitious | favourably disposed, benevolent |
impetuously | rashly and impulsively |
hardihood | unyielding boldness and daring |
pretexts | excuse, false reason |
audacity | boldness and/or fearlessness |
scrupulous | exact and careful |
confer honour on | grant an honour to |
heretofore | before now |
discerned | perceived, detected |
whence | from which place or source |
exactions | the act of demanding with authority |
imposts | tax, or exact duty |
pre-eminent | exceeding others in quality or rank |
hitherto | so far, until now |
put X to rout | make them retreat chaotically |
inundation | overflow, deluge |
many authors are wont to set off | wont to = accustomed to |
presumption | presupposition, belief |
wrest | to twist out, to take by force |
rooted out | to eradicate |
thereto | to him, to it, to that |
distemper | to cause disorder or chaos |
inchoate | just started, immature and thus chaotic |
sagacious | farsighted, with sound judgement |
rash game | risky, hasty |
wrought by | p.p. of work, caused by |
thither | to that point, to that place |
wherewith | with which |
trodden by | crushed by being walked on |
enervated | weakened and debilitated |
incredulity | disbelief |
entreaty | earnest, respectful request |
borne in mind | p.p. of bear |
wherefore | because of what |
availed himself of smth | turn smth to the advantage of himself |
quelling | subdue or suppress |
waywardness | obstinacy, contrariness, unpredictability |
desist (from a plan) | cease to proceed |
commotions | turbulent motion |
dissemble his designs | disguise or conceal |
exigencies | demand or requirement |
affable | friendly, courteous |
magnanimous | noble and generous |
dregs of the people | worst and lowest |
conjoin | to join together, marry, unite |
imparting his design | give, disclose, share |
extricated | freed from bonds or entanglement |
connivance | secret complicity, tacit consent |
at a stroke | with a single effort, at once |
ill savour | bad taste |
relished | take pleasure in |
leniency | mercy or forgiveness |
pusillanimity | vice of being timid and cowardly |
old saw | old saying |
mire | swamp, bog |
assailant | attacker |
circumspection | caution, attention to all facts of a case |
rampart | defensive embankment or wall |
public magazines | storages (not journals!) |
victual | food for humans |
posted in leaguer | besiege |
forbear | keep away from, refrain |
rash and presumptuous | excessively self-confident, arrogant |
lintel | horizontal beam above a door |
well-nigh | almost, nearly |
valiant | possessing, showing courage |
eschew | avoid an idea |
fickle | quick to change opinion, insincere |
reproach | disgrace or shame |
wide asunder | far apart, widely separated |
betake himself | to go, to resort (to a place or course of action) |
haughty | arrogant, presumptuous |
crafty | deceiving |
facile (not firm) | lazy, simplistic, easy to convince |
grave | serious |
reckoned | considered |
incur the reproach | make oneself suffer disgrace |
sumptuous | magnificent, lavish, luxurious |
parsimony | thriftiness, stinginess |
rapacious | voracious, avaricious |
lavish | luxurious, superabundant, unrestrained |
ignominy | great dishonour |
rapine | plunder, pillage |
patrimony | inheritance (esp. from one’s father) |
transcendent | surpass usual limits |
guard X from the toils | toil here means “trouble”, not “labour” |
discern | perceive, detect, distinguish |
dupe | deceived person |
asseverate | assert earnestly, confidently |
licentious | disregarding accepted rules (esp. sexual) |
ascribe | attribute smth to smb |
odium | hatred, dislike |
scorn | display disdain for something |
pronounce | declare formally |
imputations | charging, accusing someone |
begets hatred | to father, to produce, to cause |
dissensions | dissent, esp. spoken |
fomented | incite or encourage |
succour | aid or assistance, refuge, shelter |
had recourse to | use smth as a source of help |
stanch friend | persistent and loyal |
sagacity | soundness of judgement, shrewdness |
hearken to | listen and hear with attention |
vacillating | wavering, irresolute |
whencesoever | from whatever place |
height of folly | the utmost foolishness |
Apparently, Ninian Hill Thomson was an Oxford University alumna and scholar. She also seems to have studied British India law.
Discourses on Livy by Niccolò Machiavelli (1883)
The Prince by Niccolò Machiavelli (1910)
https://en.wikipedia.org/wiki/Lorenzo_de%27_Medici,_Duke_of_Urbino Lorenzo di Piero de’ Medici (Italian pronunciation: [loˈrɛntso di ˈpjɛːro de ˈmɛːditʃi]; 12 September 1492 – 4 May 1519) was the ruler of Florence from 1516 until his death in 1519. He was also Duke of Urbino during the same period. His daughter Catherine de’ Medici became Queen Consort of France, while his illegitimate son, Alessandro de’ Medici, became the first Duke of Florence.
https://en.wikipedia.org/wiki/Francesco_I_Sforza#Issue Francesco I Sforza KG (Italian pronunciation: [franˈtʃesko ˈpriːmo ˈsfɔrtsa]; 23 July 1401 – 8 March 1466) was an Italian condottiero who founded the Sforza dynasty in the duchy of Milan, ruling as its (fourth) duke from 1450 until his death. In the 1420s, he participated in the War of L’Aquila and in the 1430s fought for the Papal States and Milan against Venice. Once war between Milan and Venice ended in 1441 under mediation by Sforza, he successfully invaded southern Italy alongside René of Anjou, pretender to the throne of Naples, and after that returned to Milan. He was instrumental in the Treaty of Lodi (1454) which ensured peace in the Italian realms for a time by ensuring a strategic balance of power. 2.10 is his son.
https://en.wikipedia.org/wiki/List_of_rulers_of_Milan#House_of_Sforza_(1st_rule) Taking advantage of the state’s weakness and the resurgent Guelph-Ghibelline conflict, the commander-in-chief of the Milanese forces, Francesco I Sforza, defected from Milan to Venice in 1448,[12] and two years later, after several side switches and cunning strategies, Sforza entered the city during Annunciation. He was then declared the new Duke of Milan by the City Council,[13] using as a claim his marriage with Bianca Maria Visconti, illegitimate daughter of Filippo Maria.
https://en.wikipedia.org/wiki/Duchy_of_Ferrara The Duchy of Ferrara (Latin: Ducatus Ferrariensis; Italian: Ducato di Ferrara; Emilian: Ducà ad Frara) was a state in what is now northern Italy. It consisted of about 1,100 km2 south of the lower Po River, stretching to the valley of the lower Reno River, including the city of Ferrara. The territory that was part of the Duchy was ruled by the House of Este from 1146 to 1597.[1]
In 1471, the territory was transferred to the Papal States. Borso d’Este, already Duke of Modena and Reggio, was created Duke of Ferrara by Pope Paul II. Borso and his successors ruled Ferrara as a quasi-sovereign state until 1597, when it came under direct papal rule.[2]
https://en.wikipedia.org/wiki/Duchy_of_Ferrara
https://en.wikipedia.org/wiki/War_of_Ferrara
“Alfonso married the notorious Lucrezia Borgia, and continued the war with Venice with success.”
https://en.wikipedia.org/wiki/Duchy_of_Ferrara In 1509 he was excommunicated by Pope Julius II, and he overcame the pontifical army in 1512 defending Ravenna. (Gaston de Foix fell in this battle, as an ally of Alfonso.)
https://en.wikipedia.org/wiki/Louis_XII Louis XII (27 June 1462 – 1 January 1515) was King of France from 1498 to 1515 and King of Naples from 1501 to 1504. The son of Charles, Duke of Orléans, and Maria of Cleves, he succeeded his 2nd cousin once removed and brother in law at the time, Charles VIII, who died without direct heirs in 1498.
https://en.wikipedia.org/wiki/Louis_XII Louis opened negotiations with the Duchy of Savoy and by May 1499 had hammered out an agreement that allowed French troops to cross Savoy to reach the Duchy of Milan.
Note! Lodovico Sforza is not Francesco Sforza. https://en.wikipedia.org/wiki/Ludovico_Sforza Ludovico Maria Sforza (Italian: [ludoˈviːko maˈriːa ˈsfɔrtsa]; 27 July 1452 – 27 May 1508), also known as Ludovico il Moro (Italian: [il ˈmɔːro]; “the Moor”).[b] “Arbiter of Italy”, according to the expression used by Guicciardini,[3] was an Italian Renaissance nobleman who ruled as Duke of Milan from 1494 to 1499. His ascendancy followed the death of his nephew Gian Galeazzo Sforza. A member of the Sforza family, he was the fourth son of Francesco I Sforza. A patron of the arts during the Milanese Renaissance, he commissioned the fresco of The Last Supper by Leonardo da Vinci. He also played a central role in the Italian Wars.
https://en.wikipedia.org/wiki/Louis_XII Meanwhile, Ludovico Sforza had been gathering an army, mainly among the Swiss, to take Milan back. In mid-January 1500, his army crossed the border into the Duchy of Milan and marched toward the city of Milan.[38] Upon hearing the news of Sforza’s return, some of his partisans in the city rose up. On 1 February 1500, Marshal Trivulzio decided that he could not hold the city, and the French retreated to the fortresses west of the city. Sforza was welcomed back into the city by a joyous crowd of his supporters on 5 February 1500.[39]
Same war, very shortly after.
Same war, go read some books, not worth detailing here.
https://en.wikipedia.org/wiki/Aetolian_League
However, during the Hellenistic period, they emerged as a dominant state in central Greece and expanded by the voluntary annexation of several Greek city-states to the League. Still, the Aetolian League had to fight against Macedonia and were driven to an alliance with Rome, which resulted in the final conquest of Greece by the Romans.
https://en.wikipedia.org/wiki/Macedonian_Wars
Macedonian Wars (214–148 BC) were a series of conflicts fought by the Roman Republic and its Greek allies in the eastern Mediterranean against several different major Greek kingdoms. They resulted in Roman control or influence over Greece and the rest of the eastern Mediterranean basin, in addition to their hegemony in the western Mediterranean after the Punic Wars. Traditionally, the “Macedonian Wars” include the four wars with Macedonia, in addition to one war with the Seleucid Empire, and a final minor war with the Achaean League (which is often considered to be the final stage of the final Macedonian war). The most significant war was fought with the Seleucid Empire, while the war with Macedonia was the second, and both of these wars effectively marked the end of these empires as major world powers, even though neither of them led immediately to overt Roman domination.[1] Four separate wars were fought against the weaker power, Macedonia, due to its geographic proximity to Rome, though the last two of these wars were against haphazard insurrections rather than powerful armies.[2] Roman influence gradually dissolved Macedonian independence and digested it into what was becoming a leading empire.
https://en.wikipedia.org/wiki/Antiochus_III_the_Great Antiochus III the Great (ænˈtaɪəkəs; Greek: Ἀντίoχoς Μέγας Antiochos Megas; c. 241 – 3 July 187 BC)[1] was a Greek Hellenistic king and the 6th ruler of the Seleucid Empire, reigning from 222 to 187 BC.[2][3][4] He ruled over the region of Syria and large parts of the rest of western Asia towards the end of the 3rd century BC. Rising to the throne at the age of eighteen in 222 BC, his early campaigns against the Ptolemaic Kingdom were unsuccessful, but in the following years Antiochus gained several military victories and substantially expanded the empire’s territory. His traditional designation, the Great, reflects an epithet he assumed. He also assumed the title Basileus Megas (Greek for “Great King”), the traditional title of the Persian kings. A militarily active ruler, Antiochus restored much of the territory of the Seleucid Empire, before suffering a serious setback, towards the end of his reign, in his war against Rome.
Declaring himself the “champion of Greek freedom against Roman domination”, Antiochus III waged a four-year war against the Roman Republic beginning in mainland Greece in the autumn of 192 BC[5][6] before being decisively defeated at the Battle of Magnesia. He died three years later on campaign in the east.
https://en.wikipedia.org/wiki/Philip_V_of_Macedon
Philip V (Greek: Φίλιππος Philippos; 238–179 BC) was king (Basileus) of Macedonia from 221 to 179 BC. Philip’s reign was principally marked by an unsuccessful struggle with the emerging power of the Roman Republic. He would lead Macedon against Rome in the First and Second Macedonian Wars, losing the latter but allying with Rome in the Roman-Seleucid War towards the end of his reign.
Charles VIII, called the Affable (French: l’Affable; 30 June 1470 – 7 April 1498), was King of France from 1483 to his death in 1498. He succeeded his father Louis XI at the age of 13.[1]
To secure his rights to the Neapolitan throne that René of Anjou had left to his father, Charles made a series of concessions to neighbouring monarchs and conquered the Italian peninsula without much opposition. A coalition formed against the French invasion of 1494–98 attempted to stop Charles’ army at Fornovo, but failed and Charles marched his army back to France.
In an event that was to prove a watershed in Italian history,[16] Charles invaded Italy with 25,000 men (including 8,000 Swiss mercenaries) in September 1494 and marched across the peninsula virtually unopposed. He arrived in Pavia on 21 October 1494 and entered Pisa on 8 November 1494.[17] The French army subdued Florence in passing on their way south. Reaching Naples on 22 February 1495,[18] the French Army took Naples without a pitched battle or siege; Alfonso was expelled, and Charles was crowned King of Naples.
?
https://en.wikipedia.org/wiki/Republic_of_Genoa
Threatened by Alfonso V of Aragon, the Doge of Genoa in 1458 handed the Republic over to the French, making it the Duchy of Genoa under the control of John of Anjou, a French royal governor. However, with support from Milan, Genoa revolted and the Republic was restored in 1461. The Milanese then changed sides, conquering Genoa in 1464 and holding it as a fief of the French crown.[29][30][31] Between 1463–1478 and 1488–1499, Genoa was held by the Milanese House of Sforza.[28] From 1499 to 1528, the Republic reached its nadir, being under nearly continual French occupation. The Spanish, with their intramural allies, the “old nobility” entrenched in the mountain fastnesses behind Genoa, captured the city on May 30, 1522, and subjected the city to a pillage. When the admiral Andrea Doria of the powerful Doria family allied with the Emperor Charles V to oust the French and restore Genoa’s independence, a renewed prospect opened: 1528 marks the first loan from Genoese banks to Charles.[32]
The Republic of Florence, officially the Florentine Republic (Italian: Repubblica Fiorentina, pronounced [reˈpubblika fjorenˈtiːna], or Repubblica di Firenze), was a medieval and early modern state that was centered on the Italian city of Florence in Tuscany.[1][2] The republic originated in 1115, when the Florentine people rebelled against the Margraviate of Tuscany upon the death of Matilda of Tuscany, who controlled vast territories that included Florence. The Florentines formed a commune in her successors’ place.[3] The republic was ruled by a council known as the Signoria of Florence. The signoria was chosen by the gonfaloniere (titular ruler of the city), who was elected every two months by Florentine guild members.
During the Republic’s history, Florence was an important cultural, economic, political and artistic force in Europe. Its coin, the florin, became a world monetary standard.[4] During the Republican period, Florence was also the birthplace of the Renaissance, which is considered a fervent period of European cultural, artistic, political and economic “rebirth”.[5]
The republic had a checkered history of coups and countercoups against various factions. The Medici faction gained governance of the city in 1434 under Cosimo de’ Medici. The Medici kept control of Florence until 1494. Giovanni de’ Medici (later Pope Leo X) reconquered the republic in 1512.
Florence repudiated Medici authority for a second time in 1527, during the War of the League of Cognac. The Medici reassumed their rule in 1531 after an 11-month siege of the city, aided by Emperor Charles V.[6] Pope Clement VII, himself a Medici, appointed his relative Alessandro de’ Medici as the first “Duke of the Florentine Republic”, thereby transforming the Republic into a hereditary monarchy.[6][7]
The second Duke, Cosimo I, established a strong Florentine navy and expanded his territory, conquering Siena. In 1569, the pope declared Cosimo the first grand duke of Tuscany. The Medici ruled the Grand Duchy of Tuscany until 1737.
https://en.wikipedia.org/wiki/Bentivoglio_family
Bentivoglio (Latin: Bentivoius) was an Italian family that became the de facto rulers of Bologna and was responsible for giving the city its political autonomy during the Renaissance, although their rule did not survive a century (1401–1512).
https://en.wikipedia.org/wiki/Caterina_Sforza Caterina Sforza (1463 – 28 May 1509) was an Italian noblewoman, the Countess of Forlì and Lady of Imola, firstly with her husband Girolamo Riario, and after his death as a regent of her son Ottaviano.
The descendant of a dynasty of noted condottieri, from an early age, Caterina distinguished herself through her bold and impetuous actions taken to safeguard her possessions from possible usurpers and to defend her dominions from attack, when they were involved in political intrigues.
https://en.wikipedia.org/wiki/Manfredi_family#Manfredi_family_members_who_were_Lords_of_Faenza The Manfredi were a noble family of northern Italy, who, with some interruptions, held the seigniory of the city of Faenza in Romagna from the beginning of the 14th century to the end of the 15th century. The family also held the seigniory of Imola for several decades at the same time.
https://en.wikipedia.org/wiki/Pesaro During the Renaissance it was ruled successively by the houses of Montefeltro (1285–1445), Sforza (1445–1512) and Della Rovere (1513–1631). Under the last family, who selected it as capital of their duchy, Pesaro saw its most flourishing age, with the construction of numerous public and private palaces, and the erection of a new line of walls (the Mura Roveresche). In 1475, a legendary wedding took place in Pesaro, when Costanzo Sforza and Camilla d’Aragona married.[5]
https://en.wikipedia.org/wiki/Rimini#Renaissance_and_Enlightenment
Capital of Romagna. Wiki does not really include that much information about Rimini during the Italian Wars.
Town in central Italy.
https://en.wikipedia.org/wiki/Camerino
In 1382, his descendant Giovanni Da Varano built a 12-kilometre (7.5 mi) long wall to defend the city, while a sumptuous Ducal Palace was built by Giulio Cesare in 1460. Giulio Cesare’s daughter, Camilla Battista da Varano, was canonized a saint by Pope Benedict XVI in 2010. In 1336 the University was founded. The Da Varano were nearly extinguished by Cesare Borgia in 1502, and in 1545 the city fell under direct Papal administration.
Town in Toscana. https://en.wikipedia.org/wiki/Piombino#The_Battle_of_Piombino The Castle of Piombino remained a Pisan possession until Gerardo Appiani, ceding Pisa to the Milanese Visconti, carved out the independent state of the Principality of Piombino that included the islands of the Tuscan Archipelago: Elba, Pianosa, Montecristo, Capraia, Gorgona, and Giglio, for his family who held it until 1634. In 1445, through his marriage with Caterina Appiani, Rinaldo Orsini acquired the lordship. In 1501–1503 the principality was under Cesare Borgia. In 1509 the Appiani became princes of the Holy Roman Empire with the title of Piombino.
Town in Toscana.
Okay, I do not remember exactly what the deal was with them.
Father of 2.42. He followed 2.78. He preceded 2.59.
Alexander is considered one of the most controversial of the Renaissance popes, partly because he acknowledged fathering several children by his mistresses. As a result, his Italianized Valencian surname, Borgia, became a byword for libertinism and nepotism, which are traditionally considered as characterizing his pontificate. On the other hand, two of Alexander’s successors, Sixtus V and Urban VIII, described him as one of the most outstanding popes since Saint Peter.[5]
https://en.wikipedia.org/wiki/Romagna Romagna (Romagnol: Rumâgna) is an Italian historical region that approximately corresponds to the south-eastern portion of present-day Emilia-Romagna, North Italy. Traditionally, it is limited by the Apennines to the south-west, the Adriatic to the east, and the rivers Reno and Sillaro to the north and west. The region’s major cities include Cesena, Faenza, Forlì, Imola, Ravenna, Rimini and City of San Marino (San Marino is a landlocked state inside the Romagna historical region).
https://en.wikipedia.org/wiki/Tuscany#Renaissance Tuscany, especially Florence, is regarded as the birthplace of the Renaissance. Though “Tuscany” remained a linguistic, cultural, and geographic conception rather than a political reality, in the 15th century, Florence extended its dominion in Tuscany through the annexation of Arezzo in 1384, the purchase of Pisa in 1405, and the suppression of a local resistance there (1406). Livorno was bought in 1421 and became the harbour of Florence.
From the leading city of Florence, the republic was from 1434 onward dominated by the increasingly monarchical Medici family. Initially, under Cosimo, Piero the Gouty, Lorenzo and Piero the Unfortunate, the forms of the republic were retained and the Medici ruled without a title, usually without even a formal office. These rulers presided over the Florentine Renaissance. There was a return to the republic from 1494 to 1512, when first Girolamo Savonarola then Piero Soderini oversaw the state. Cardinal Giovanni di Lorenzo de’ Medici retook the city with Spanish forces in 1512, before going to Rome to become Pope Leo X. Florence was dominated by a series of papal proxies until 1527 when the citizens declared the republic again, only to have it taken from them again in 1530 after a siege by an Imperial and Spanish army. At this point Pope Clement VII and Charles V appointed Alessandro de’ Medici as the first formal hereditary ruler.
https://en.wikipedia.org/wiki/Cesare_Borgia#Later_years_and_death Duke of Valentinois (French: Duc de Valentinois; Italian: Duca Valentino) is a title of nobility, originally in the French peerage. It is currently one of the many hereditary titles claimed by the Prince of Monaco despite its extinction in French law in 1949. Though it originally indicated administrative control of the Duchy of Valentinois, based around the city of Valence, the duchy has since become part of France, making the title simply one of courtesy.
It has been created at least four times: on August 17, 1498, for Cesare Borgia, in 1548 for Diane of Poitiers, in 1642 for Prince Honoré II of Monaco, and most recently in 1715 for Prince Jacques I of Monaco.
Always Ghibelline, Pisa tried to build up its power in the course of the 14th century, and even managed to defeat Florence in the Battle of Montecatini (1315), under the command of Uguccione della Faggiuola. Eventually, however, after a long siege, Pisa was occupied by Florentines in 1405.[9] Florentines corrupted the capitano del popolo (“people’s chieftain”), Giovanni Gambacorta, who at night opened the city gate of San Marco. Pisa was never conquered by an army. In 1409, Pisa was the seat of a council trying to set the question of the Great Schism. In the 15th century, access to the sea became more difficult, as the port was silting up and was cut off from the sea. When in 1494, Charles VIII of France invaded the Italian states to claim the Kingdom of Naples,[9] Pisa reclaimed its independence as the Second Pisan Republic.
This Alba is not the “British Albion”! It is the village of Alba Longa in Lazio.
https://en.wikipedia.org/wiki/Medes Predecessors of Persians.
https://en.wikipedia.org/wiki/Hiero_II_of_Syracuse Hiero II (Greek: Ἱέρων Β΄; c. 308 BC – 215 BC) was the Greek tyrant of Syracuse from 275 to 215 BC, and the illegitimate son of a Syracusan noble, Hierocles, who claimed descent from Gelon. He was a former general of Pyrrhus of Epirus and an important figure of the First Punic War.[1] He figures in the story of famed thinker Archimedes shouting “Eureka”.
https://en.wikipedia.org/wiki/Cesare_Borgia Cesare Borgia (Italian pronunciation: [ˈtʃeːzare ˈbɔrdʒa, ˈtʃɛː-]; Valencian: Cèsar Borja [ˈsɛzaɾ ˈbɔɾdʒa]; Spanish: César Borja [ˈθesaɾ ˈβoɾxa]; 13 September 1475 – 12 March 1507) was an Italian[3][4] ex-cardinal[5] and condottiero (mercenary leader)[6][7] of Aragonese (Spanish) origin,[8] whose fight for power was a major inspiration for The Prince by Niccolò Machiavelli. He was an illegitimate son of Pope Alexander VI and member of the Spanish-Aragonese House of Borgia.[9]
After initially entering the Church and becoming a cardinal on his father’s election to the Papacy, he became, after the death of his brother in 1498, the first person to resign a cardinalate. He served as a condottiero for King Louis XII of France around 1500, and occupied Milan and Naples during the Italian Wars. At the same time he carved out a state for himself in Central Italy, but after his father’s death he was unable to retain power for long.
One of the two factions of Roman Barons; fought the 2.43
The House of Colonna, also known as Sciarrillo or Sciarra, is an Italian noble family, forming part of the papal nobility. It was powerful in medieval and Renaissance Rome, supplying one Pope (Martin V) and many other church and political leaders. The family is notable for its bitter feud with the Orsini family over influence in Rome, until it was stopped by Papal Bull in 1511. In 1571, the heads of both families married nieces of Pope Sixtus V. Thereafter, historians recorded that “no peace had been concluded between the princes of Christendom, in which they had not been included by name”.[4] https://en.wikipedia.org/wiki/Colonna_family
Urbino (UK: ɜːrˈbiːnoʊ ur-BEE-noh;[3] Italian: [urˈbiːno]; Romagnol: Urbìn) is a walled city in the Marche region of Italy, south-west of Pesaro, a World Heritage Site notable for a remarkable historical legacy of independent Renaissance culture, especially under the patronage of Federico da Montefeltro, duke of Urbino from 1444 to 1482.
Cesare Borgia dispossessed Guidobaldo da Montefeltro, Duke of Urbino, and Elisabetta Gonzaga in 1502, with the complicity of his father, Pope Alexander VI. After the attempt of Pope Leo X to appoint a young Medici as duke, thwarted by the early death of Lorenzo II de Medici in 1519, Urbino was part of the Papal States, under the dynasty of the dukes Della Rovere (1508–1631). They moved the court to the city of Pesaro in 1523 and Urbino began a slow decline that would continue until the last decades of the seventeenth century.[5]
Magione (Italian pronunciation: [maˈdʒoːne]) is a comune (municipality) in the Province of Perugia in the Italian region Umbria, located about 15 km west of Perugia.
Haven’t found.
Haven’t found.
https://en.wikipedia.org/wiki/Paolo_Orsini_(condottiero,_born_1450)
Paolo Orsini (1450 – 18 January 1503) was an Italian condottiero in the service of the Papal States, Ferdinand of Aragon and the Republic of Florence.
He commanded the papal guards in 1485 when he and his cousin Virginio tried to take over Rome, but Paolo had all his goods confiscated as a result in 1496. He entered pope Alexander VI’s service in 1497 and served alongside Cesare Borgia in the latter’s attempt to conquer Bologna.[2]
He supported il Valentino in aiding the Duchy of Urbino who wished to return to ruling their state despite the Borgias’ refusal to allow this. After capturing Senigallia the Borgia used deception to arrest the four noblemen it wished to eliminate for taking part in the Magione conspiracy, with Vitellozzo Vitelli and Oliverotto da Fermo killed on 31 December 1502 by the assassin Michelotto Corella. Paolo and his cousin Francesco (fourth duke of Gravina and son of Raimondo Orsini) were both handed over at Città della Pieve, where they were strangled on 18 January 1503.[3]
Senigallia (or Sinigaglia in Old Italian, Romagnol: S’nigaja) is a comune and port town on Italy’s Adriatic coast. It is situated in the province of Ancona in the Marche region and lies approximately 30 kilometers north-west of the provincial capital city Ancona. Senigallia’s small port is located at the mouth of the river Misa. It is one of the endpoints of the Massa-Senigallia Line, one of the most important dividing lines (isoglosses) in the classification of the Romance languages.
Note that Sinigaglia is not Senegal.
https://en.wikipedia.org/wiki/Ramiro_de_Lorca
On 26 December 1502, Ramiro was executed in the main plaza of Cesena, his body cut in two and his head stuck on a pike. Niccolò Machiavelli wrote in The Prince that Ramiro’s bloody actions were what prompted Cesare to execute him and distance himself from his crimes.
https://en.wikipedia.org/wiki/Cesena Cesena (Italian pronunciation: [tʃeˈzɛːna]; Romagnol: Cisêna) is a city and comune in the Emilia-Romagna region of Italy, served by Autostrada A14, and located near the Apennine Mountains, about 15 kilometres (9 miles) from the Adriatic Sea.
After Novello’s death (1465), Cesena returned to the Papal States, but was again seized by a local seignor, Cesare Borgia, in 1500. The city was elevated to capital of his powerful though short-lived duchy.
Which year?
https://en.wikipedia.org/wiki/Italian_Wars Combined with the ambition of Ludovico Sforza, its collapse allowed Charles VIII of France to invade Naples in 1494, which drew in Spain and the Holy Roman Empire.
https://en.wikipedia.org/wiki/Gaeta#Middle_Ages In 1495, king Charles VIII of France conquered the city and sacked it. The following year, however, Frederick I of Aragon regained it with a tremendous siege which lasted from 8 September to 18 November.
In 1501 Gaeta was retaken by the French; however, after their defeat at the Garigliano (3 January 1504), they abandoned it to Gonzalo Fernández de Córdoba, Ferdinand the Catholic’s general.
In 1528 Andrea Doria, admiral of Charles V, defeated a French fleet in the waters off Gaeta and gave the city to its emperor. Gaeta was thenceforth protected with a new and more extensive wall, which also encompassed Monte Orlando.
https://en.wikipedia.org/wiki/Perugia#History Perugia (pəˈruːdʒə,[3][4] US also -dʒiə, peɪˈ-,[5] Italian: [peˈruːdʒa]; Latin: Perusia) is the capital city of Umbria in central Italy, crossed by the River Tiber, and of the province of Perugia. The city is located about 164 km (102 mi) north of Rome and 148 km (92 mi) southeast of Florence. It covers a high hilltop and part of the valleys around the area. The region of Umbria is bordered by Tuscany, Lazio, and Marche.
After the assassination in 1398 of Biordo Michelotti, who had made himself lord of Perugia, the city became a pawn in the Italian Wars, passing to Gian Galeazzo Visconti (1400), to Pope Boniface IX (1403), and to Ladislaus of Naples (1408–14), before it settled into a period of sound governance under the Signoria of the condottiero Braccio da Montone (1416–24), who reached a concordance with the papacy. Following mutual atrocities of the Oddi and the Baglioni families, power was at last concentrated in the Baglioni, who though they had no legal position, defied all other authority, though their bloody internal squabbles culminated in a massacre, 14 July 1500.[24] Gian Paolo Baglioni was lured to Rome in 1520 and beheaded by Leo X; and in 1540, Rodolfo, who had slain a papal legate, was defeated by Pier Luigi Farnese, and the city, captured and plundered by his soldiery, was deprived of its privileges.
https://en.wikipedia.org/wiki/Baglioni_family
The House of Baglioni is an Umbrian noble family that ruled over the city of Perugia between 1438 and 1540, when Rodolfo II Baglioni had to surrender the city to the papal troops of Pope Paul III after the Salt War.[1] At that point, Perugia came under the control of the Papal States.[2]
https://en.wikipedia.org/wiki/Vitelli_family The House of Vitelli, among other families so named, were a prominent noble family of Umbria, rulers of Città di Castello and lesser rocche.
Vitellozzo Vitelli, brother of 2.71
https://en.wikipedia.org/wiki/Pope_Julius_II Pope Julius II (Latin: Iulius II; Italian: Giulio II; born Giuliano della Rovere; 5 December 1443 – 21 February 1513) was head of the Catholic Church and ruler of the Papal States from 1503 to his death in 1513. Nicknamed the Warrior Pope or the Fearsome Pope, he chose his papal name not in honour of Pope Julius I but in emulation of Julius Caesar.[1] One of the most powerful and influential popes, Julius II was a central figure of the High Renaissance and left a significant cultural and political legacy.[2]
In 1506, Julius II established the Vatican Museums and initiated the rebuilding of the St. Peter’s Basilica. The same year he organized the famous Swiss Guards for his personal protection and commanded a successful campaign in Romagna against local lords. The interests of Julius II lay also in the New World, as he ratified the Treaty of Tordesillas, establishing the first bishoprics in the Americas and beginning the catholicization of Latin America. In 1508, he commissioned the Raphael Rooms and Michelangelo’s paintings in the Sistine Chapel.
Julius II was described by Machiavelli in his works as an ideal prince. Pope Julius II allowed people seeking indulgences to donate money to the Church which would be used for the construction of Saint Peter’s Basilica.[3] In his Julius Excluded from Heaven, the scholar Erasmus of Rotterdam described a Pope Julius II in the afterlife planning to storm Heaven when he is denied entry.[4]
https://en.wikipedia.org/wiki/St._Peter_ad_Vincula Saint Peter ad Vincula (Saint Peter in Chains) alludes to the Bible story of the Liberation of Saint Peter, when the Apostle Peter, imprisoned by King Herod Agrippa, was rescued by an angel.
Perhaps it was also someone’s surname, but I haven’t found it.
https://en.wikipedia.org/wiki/Colonna,_Lazio
In 1298 Pope Boniface VIII ordered the destruction of Colonna and its castle as punishment against the Colonna family. With the advent of Pope Clement V (1305) the Colonna family resumed the fief with all of its territories.
Who is that? https://en.wikipedia.org/wiki/Giovanni_Antonio_Sangiorgio ?
https://en.wikipedia.org/wiki/Jacques_d%27Amboise_(bishop) ? But he was not a cardinal?
Apparently, Cardinals from Spain, who elected Pope Julius II.
https://en.wikipedia.org/wiki/Agathocles_of_Syracuse Agathocles (Greek: Ἀγαθοκλῆς, Agathoklḗs; 361–289 BC) was a Greek tyrant of Syracuse (317–289 BC) and self-styled king of Sicily (304–289 BC).
Seemingly, not much happened in Syracuse during the Italian Wars.
Apparently, https://en.wikipedia.org/wiki/Hamilcar
Hamilcar, son of Gisgo and grandson of Hanno the Great, led a campaign against Agathocles of Syracuse during the Third Sicilian War. He defeated Agathocles in the Battle of the Himera River in 311 BC. He was captured during the Siege of Syracuse and then killed in 309 BC.
https://en.wikipedia.org/wiki/Oliverotto_Euffreducci
Oliverotto Euffreducci, known as Oliverotto of Fermo (1475, Fermo – 31 December 1502, Senigallia), was an Italian condottiero and lord of Fermo during the pontificate of Alexander VI.
https://en.wikipedia.org/wiki/Paolo_Vitelli_(condottiero)
Paolo Vitelli (1461 – 1 October 1499) was an Italian knight and condottiero as well as lord of Montone. He was born in Città di Castello, which had been captured by his father, Niccolò Vitelli. He was the brother of Vitellozzo and Chiappino, both condottieri.[1] He worked as a mercenary for the republic of Florence, where he was later suspected of treachery and executed. This led his brother Vitellozzo to repeatedly assail Tuscan properties.[2]
https://en.wikipedia.org/wiki/Vitellozzo_Vitelli
Vitellozzo Vitelli (c. 1458 – December 31, 1502) was an Italian condottiero. He was lord of Montone, Città di Castello, Monterchi and Anghiari.
Friend of Leonardo Da Vinci.
https://en.wikipedia.org/wiki/Nabis
Nabis (Greek: Νάβις) was the last king of independent Sparta.[2] He was probably a member of the Heracleidae,[3] and he ruled from 207 BC to 192 BC, during the years of the First and Second Macedonian Wars and the eponymous “War against Nabis”, i.e. against him. After taking the throne by executing two claimants, he began rebuilding Sparta’s power.[2] During the Second Macedonian War, Nabis sided with King Philip V of Macedon and in return he received the city of Argos. However, when the war began to turn against the Macedonians, he defected to Rome. After the war, the Romans, urged by the Achaean League, attacked Nabis and defeated him. He then was assassinated in 192 BC by the Aetolian League. He represented the last phase of Sparta’s reformist period.[4]
https://en.wikipedia.org/wiki/Gracchi_brothers
The Gracchi brothers were two Roman brothers, sons of Tiberius Sempronius Gracchus who was consul in 177 BC. Tiberius, the elder brother, was tribune of the plebs in 133 BC and Gaius, the younger brother, was tribune a decade later in 123–122 BC.[1]
They attempted to redistribute the ager publicus – the public land hitherto controlled principally by aristocrats – to the urban poor and military veterans, in addition to other social and constitutional reforms. After achieving some early success, both were assassinated by the Optimates, the conservative faction in the Senate that opposed these reforms.
Have not found details.
I think here he means all the powerful and important people who are not formally princes, like condottieri.
https://en.wikipedia.org/wiki/Pope_Sixtus_IV Pope Sixtus IV (Italian: Sisto IV: 21 July 1414 – 12 August 1484), born Francesco della Rovere, was head of the Catholic Church and ruler of the Papal States from 9 August 1471 to his death. His accomplishments as pope included the construction of the Sistine Chapel and the creation of the Vatican Archives. A patron of the arts, he brought together the group of artists who ushered the Early Renaissance into Rome with the first masterpieces of the city’s new artistic age.
Sixtus founded the Spanish Inquisition through the bull Exigit sincerae devotionis affectus (1478), and he annulled the decrees of the Council of Constance. He was noted for his nepotism and was personally involved in the infamous Pazzi conspiracy.[1]
He was followed by 2.32
https://en.wikipedia.org/wiki/Pope_Leo_X Pope Leo X (Italian: Leone X; born Giovanni di Lorenzo de’ Medici, 11 December 1475 – 1 December 1521) was head of the Catholic Church and ruler of the Papal States from 9 March 1513 to his death in 1521.[1]
Born into the prominent political and banking Medici family of Florence, Giovanni was the second son of Lorenzo de’ Medici, ruler of the Florentine Republic, and was elevated to the cardinalate in 1489. Following the death of Pope Julius II, Giovanni was elected pope after securing the backing of the younger members of the Sacred College. Early on in his rule he oversaw the closing sessions of the Fifth Council of the Lateran, but struggled to implement the reforms agreed. In 1517 he led a costly war that succeeded in securing his nephew as Duke of Urbino, but reduced papal finances.
https://en.wikipedia.org/wiki/Epaminondas
Epaminondas ( ɪˌpæmɪˈnɒndəs; Greek: Ἐπαμεινώνδας; 419/411–362 BC) was a Greek general of Thebes and statesman of the 4th century BC who transformed the Ancient Greek city-state, leading it out of Spartan subjugation into a pre-eminent position in Greek politics called the Theban Hegemony. In the process, he broke Spartan military power with his victory at Leuctra and liberated the Messenian helots, a group of Peloponnesian Greeks who had been enslaved under Spartan rule for some 230 years after being defeated in the Messenian War ending in 600 BC.
https://en.wikipedia.org/wiki/Caravaggio,_Lombardy#History
I have not found a description of the battle.
https://en.wikipedia.org/wiki/Joanna_of_Naples_(1478%E2%80%931518)
Joanna of Naples (15 April 1478 – 27 August 1518) was Queen of Naples by marriage to her nephew, Ferdinand II of Naples. After the death of her spouse, she was for a short while a candidate for the throne.
https://en.wikipedia.org/wiki/John_Hawkwood
Sir John Hawkwood (c. 1323 – 17 March 1394) was an English soldier who served as a mercenary leader or condottiero in Italy. As his name was difficult to pronounce for non-English-speaking contemporaries, there are many variations of it in the historical record. He often referred to himself as Haukevvod and in Italy he was known as Giovanni Acuto, literally meaning “John Sharp” (or “John the Astute”) in reference to his “cleverness or cunning”.[1] His name was Latinised as Johannes Acutus (“John Sharp”).[2] Other recorded forms are Aucgunctur, Haughd, Hauvod, Hankelvode, Augudh, Auchevud, Haukwode and Haucod.[3] His exploits made him a man shrouded in myth in both England and Italy. Much of his enduring fame results from the surviving large and prominent fresco portrait of him in the Duomo, Florence, made in 1436 by Paolo Uccello, seen every year by 4½ million[4] tourists.
https://en.wikipedia.org/wiki/Milanese_War_of_Succession
Wealthy Italian family, famous for fighting in the Milanese War of Succession.
https://en.wikipedia.org/wiki/Carmagnola
Carmagnola (Italian: [karmaɲˈɲɔːla]; Piedmontese: Carmagnòla [karmaˈɲɔla]) is a comune (municipality) in the Metropolitan City of Turin in the Italian region Piedmont, located 29 kilometres (18 mi) south of Turin.[3] The town is on the right side of the Po river.
https://en.wikipedia.org/wiki/Alberico_da_Barbiano
Alberico da Barbiano (c. 1344–1409) was the first of the Italian condottieri. His master in military matters was the English mercenary John Hawkwood, known in Italy as Giovanni Acuto. Alberico’s compagnia fought under the banner of Saint George, as the compagnia San Giorgio.[1]
https://en.wikipedia.org/wiki/Ferdinand_II_of_Aragon
Ferdinand II (Aragonese: Ferrando; Catalan: Ferran; Basque: Errando; Italian: Ferdinando; Latin: Ferdinandus; Spanish: Fernando; 10 March 1452 – 23 January 1516), also called Ferdinand the Catholic (Spanish: el Católico), was King of Aragon and Sardinia from 1479, King of Sicily from 1468, King of Naples (as Ferdinand III) from 1504 and King of Navarre (as Ferdinand I) from 1512 until his death in 1516. He was also the Duke (nominal) of the ancient Duchies of Athens and Neopatria. He was King of Castile and León (as Ferdinand V) from 1475 to 1504, alongside his wife Queen Isabella I. From 1506 to 1516, he was the Regent of the Crown of Castile, making him the effective ruler of Castile. From 1511 to 1516, he styled himself as Imperator totius Africa (Emperor of All Africa) after having conquered Tlemcen and making the Zayyanid Sultan, Abu Abdallah V, his vassal.[1] He was also the Grandmaster of the Spanish Military Orders of Santiago (1499-1516), Calatrava (1487-1516), Alcantara (1492-1516) and Montesa (1499-1516), after he permanently annexed them into the Spanish Crown. He reigned jointly with Isabella over a dynastically unified Spain; together they are known as the Catholic Monarchs. Ferdinand is considered the de facto first King of Spain, and was described as such during his reign (Latin: Rex Hispaniarum; Spanish: Rey de España).
https://en.wikipedia.org/wiki/Constantine_XI_Palaiologos
Constantine XI Dragases Palaiologos or Dragaš Palaeologus (Greek: Κωνσταντῖνος Δραγάσης Παλαιολόγος, Kōnstantînos Dragásēs Palaiológos; 8 February 1405 – 29 May 1453) was the last Roman emperor, reigning from 1449 until his death in battle at the Fall of Constantinople in 1453. Constantine’s death marked the end of the Byzantine Empire, which traced its origin to Constantine the Great’s foundation of Constantinople as the Roman Empire’s new capital in 330. Given that the Byzantine Empire was the Roman Empire’s medieval continuation, with its citizens continuing to refer to themselves as Romans, Constantine XI’s death and Constantinople’s fall also marked the definitive end of the Roman Empire, founded by Augustus almost 1,500 years earlier.
https://en.wikipedia.org/wiki/Imola
Imola (Italian: [ˈiːmola]; Romagnol: Jômla or Jemula) is a city and comune in the Metropolitan City of Bologna, located on the river Santerno, in the Emilia-Romagna region of northern Italy. The city is traditionally considered the western entrance to the historical region Romagna.
Pope Benedict XII turned the city and its territory over to Lippo II Alidosi with the title of pontifical vicar, the power remaining in the family Alidosi until 1424, when the condottiero Angelo della Pergola, “capitano” for Filippo Maria Visconti, gained the supremacy (see also Wars in Lombardy). In 1426 the city was restored to the Holy See, and the legate (later Cardinal) Capranica inaugurated a new regime in public affairs.
Various condottieri later ruled in the city, such as the Visconti; several landmark fortresses remain from this period. In 1434, 1438, and 1470, Imola was conferred on the Sforza, who had become dukes of Milan (Lombardy). It was again brought under papal authority when it was bestowed as dowry on Caterina Sforza, the bride of Girolamo Riario, nephew of Pope Sixtus IV. Riario was invested with the Principality of Forlì and Imola. This proved advantageous to Imola, which was embellished with beautiful palaces and works of art (e.g. in the cathedral, the tomb of Girolamo, murdered in 1488 by conspirators of Forli). The rule of the Riarii, however, was brief, as Pope Alexander VI deprived the son of Girolamo, Ottaviano, of power, and on 25 November 1499, the city surrendered to Cesare Borgia. After his death, two factions, that of Galeazzo Riario and that of the Church, competed for control of the city. The ecclesiastical party was victorious, and in 1504 Imola submitted to Pope Julius II. The last trace of these contests was a bitter enmity between the Vaini and Sassatelli families.
https://en.wikipedia.org/wiki/Forl%C3%AC
Forlì (fɔːrˈliː for-LEE; Italian: [forˈli]; Romagnol: Furlè [furˈlɛ]; Latin: Forum Livii) is a comune (municipality) and city in Emilia-Romagna, Northern Italy, and is the capital of the province of Forlì-Cesena. It is the central city of Romagna.
Local factions with papal support ousted the family in 1327–29 and again in 1359–75, and at other turns of events the bishops were expelled by the Ordelaffi. Until the Renaissance the Ordelaffi strived to maintain the possession of the city and its countryside, especially against Papal attempts to assert back their authority. Often civil wars between members of the family occurred. They also fought as condottieri for other states to earn themselves money to protect or embellish Forlì.
https://en.wikipedia.org/wiki/Philopoemen
Philopoemen ˌfɪləˈpiːmən (Greek: Φιλοποίμην Philopoímēn; 253 BC, Megalopolis – 183 BC, Messene) was a skilled Greek general and statesman, who was Achaean strategos on eight occasions.
From the time he was appointed as strategos in 209 BC, Philopoemen helped turn the Achaean League into an important military power in Greece. He was called “the last of the Greeks” by an anonymous Roman.
https://en.wikipedia.org/wiki/Xenophon
Xenophon of Athens ( ˈzɛnəfən, zi-, -fɒn; Ancient Greek: Ξενοφῶν [ksenopʰɔ̂ːn]; c. 430[1] – probably 355 or 354 BC[4]) was a Greek military leader, philosopher, and historian, born in Athens. At the age of 30, Xenophon was elected commander of one of the biggest Greek mercenary armies of the Achaemenid Empire, the Ten Thousand, that marched on and came close to capturing Babylon in 401 BC.
Today, Xenophon is best known for his historical works. The Hellenica continues directly from the final sentence of Thucydides’ History of the Peloponnesian War covering the last seven years of the Peloponnesian War (431–404 BC) and the subsequent forty-two years (404 BC–362 BC) ending with the Second Battle of Mantinea.
https://en.wikipedia.org/wiki/Pistoia
Pistoia (US: pɪˈstɔɪə, piːˈstoʊjɑː;[3][4] Italian: [pisˈtoːja])[5] is a city and comune in the Italian region of Tuscany, the capital of a province of the same name, located about 30 kilometres (19 mi) west and north of Florence and is crossed by the Ombrone Pistoiese, a tributary of the River Arno.
In 1254 the Ghibelline town of Pistoia was conquered by the Guelph Florence; this did not pacify the town, but led to marked civil violence between “Black” and “White” Guelph factions, pitting different noble families against one another. In the Inferno of Dante, we encounter a particularly violent member of the Black faction of Pistoia, Vanni Fucci, tangled up in a knot of snakes while cursing God, who states: (I am a) beast and Pistoia my worthy lair. Pistoia remained a Florentine holding except for a brief period in the 14th century, when a former abbott, Ormanno Tedici, became Lord of the city. This did not last long, since his nephew Filippo sold the town to Castruccio Castracani of Lucca. The town was officially annexed to Florence in 1530.
https://en.wikipedia.org/wiki/Hannibal
Hannibal ( ˈhænɪbəl; Punic: 𐤇𐤍𐤁𐤏𐤋, Ḥannibaʿl; 247 – between 183 and 181 BC) was a Carthaginian general and statesman who commanded the forces of Carthage in their battle against the Roman Republic during the Second Punic War. He is widely regarded as one of the greatest military commanders in history.
https://en.wikipedia.org/wiki/Quintus_Fabius_Maximus_Verrucosus
Quintus Fabius Maximus Verrucosus, surnamed Cunctator (c. 280 – 203 BC), was a Roman statesman and general of the third century BC. He was consul five times (233, 228, 215, 214, and 209 BC) and was appointed dictator in 221 and 217 BC. He was censor in 230 BC. His agnomen, Cunctator, usually translated as “the delayer”, refers to the strategy that he employed against Hannibal’s forces during the Second Punic War. Facing an outstanding commander with superior numbers, he pursued a then-novel strategy of targeting the enemy’s supply lines, and accepting only smaller engagements on favourable ground, rather than risking his entire army on direct confrontation with Hannibal himself. As a result, he is regarded as the originator of many tactics used in guerrilla warfare.[1]
https://en.wikipedia.org/wiki/Locrians
The Locrians (Greek: Λοκροί, Locri) were an ancient Greek tribe that inhabited the region of Locris in Central Greece, around Parnassus. They spoke the Locrian dialect, a Doric-Northwest dialect, and were closely related to their neighbouring tribes, the Phocians and the Dorians. They were divided into two geographically distinct tribes, the western Ozolians and the eastern Opuntians; their primary towns were Amphissa and Opus respectively, and their most important colony was the city of Epizephyrian Locris in Magna Graecia, which still bears the name “Locri”. Among others, Ajax the Lesser and Patroclus were the most famous Locrian heroes, both distinguished in the Trojan War; Zaleucus from Epizephyrian Locris devised the first written Greek law code, the Locrian code.
https://en.wikipedia.org/wiki/Chiron
In Greek mythology, Chiron ( ˈkaɪrən KY-rən; also Cheiron or Kheiron; Ancient Greek: Χείρων, romanized: Kheírōn, lit. ’hand’)[1] was held to be the superlative centaur amongst his brethren since he was called the “wisest and justest of all the centaurs”.[2]
Presumably, taught Achilles (and other ancient princes, as The Prince has it).
Probably, the Second, not the First: https://en.wikipedia.org/wiki/Annibale_II_Bentivoglio
Annibale II Bentivoglio (1467[1] – June 1540) was an Italian condottiero, who was shortly lord of Bologna in 1511–1512. He was the last member of his family to hold power in the city. He was the son of Giovanni II Bentivoglio.
Bentivoglio Altarpiece by Lorenzo Costa, detail with the portrait of Annibale II Bentivoglio.
In 1487 he married Lucrezia d’Este. He served Florence and fought against the French invasion of Charles VIII in 1494. In 1500, in a changing of sides ordered by his father, he paid 50,000 ducats to Gian Giacomo Trivulzio, French plenipotentiary in Milan, to save his city from any attack.
In 1506 Giovanni II was ousted from Bologna. Annibale and his brother Ermes remained in the city in order to favour the family’s return, but in vain. In 1511, thanks to Trivulzio’s intercession, he managed to return as ruler. But he was able to maintain his position only until June 10, 1512, after the French defeat at Ravenna.
He took refuge in Ferrara, where he died in 1540. He is portrayed in Lorenzo Costa the Elder’s Bentivoglio Altarpiece, commissioned by his father in 1488.
Some competitors to the Bentivoglio family?
https://en.wikipedia.org/wiki/Marcus_Aurelius
Marcus Aurelius Antoninus ( ɔːˈriːliəs aw-REE-lee-əs;[2] 26 April 121 – 17 March 180) was Roman emperor from 161 to 180 and a Stoic philosopher. He was the last of the rulers known as the Five Good Emperors (a term coined some 13 centuries later by Niccolò Machiavelli), and the last emperor of the Pax Romana, an age of relative peace and stability for the Roman Empire lasting from 27 BC to 180 AD. He served as Roman consul in 140, 145, and 161.
https://en.wikipedia.org/wiki/Commodus
Commodus ( ˈkɒmədəs;[4] 31 August 161 – 31 December 192) was a Roman emperor who ruled from 176 to 192. He served jointly with his father Marcus Aurelius from 176 until the latter’s death in 180, and thereafter he reigned alone until his assassination. His reign is commonly thought of as marking the end of a golden period of peace in the history of the Roman Empire, known as the Pax Romana.
https://en.wikipedia.org/wiki/Pertinax
Publius Helvius Pertinax ( ˈpɜːrtɪnæks; 1 August 126 – 28 March 193) was Roman emperor for the first three months of 193. He succeeded Commodus to become the first emperor during the tumultuous Year of the Five Emperors.
https://en.wikipedia.org/wiki/Didius_Julianus
Marcus Didius Julianus ( ˈdɪdiəs; 29 January 133 or 137 – 2 June 193)[3] was Roman emperor for nine weeks from March to June 193, during the Year of the Five Emperors. Julianus had a promising political career, governing several provinces, including Dalmatia and Germania Inferior, and defeated the Chauci and Chatti, two invading Germanic tribes. He was even appointed to the consulship in 175 along with Pertinax as a reward, before being demoted by Commodus. After this demotion, his early, promising political career languished.
https://en.wikipedia.org/wiki/Septimius_Severus
Lucius Septimius Severus (Latin: [sɛˈweːrʊs]; 11 April 145 – 4 February 211) was Roman emperor from 193 to 211. He was born in Leptis Magna (present-day Al-Khums, Libya) in the Roman province of Africa. As a young man he advanced through the customary succession of offices under the reigns of Marcus Aurelius and Commodus. Severus seized power after the death of the emperor Pertinax in 193 during the Year of the Five Emperors.
https://en.wikipedia.org/wiki/Caracalla
Marcus Aurelius Antoninus “Caracalla” ( ˌkærəˈkælə;[2] born Lucius Septimius Bassianus, 4 April 188 – 8 April 217) was Roman emperor from 198 to 217. He was a member of the Severan dynasty, the elder son of Emperor Septimius Severus and Empress Julia Domna.
https://en.wikipedia.org/wiki/Macrinus
Marcus Opellius Macrinus ( məˈkraɪnəs; c. 165 – June 218) was Roman emperor from April 217 to June 218, reigning jointly with his young son Diadumenianus. As a member of the equestrian class, he became the first emperor who did not hail from the senatorial class and also the first emperor who never visited Rome during his reign.
https://en.wikipedia.org/wiki/Elagabalus
Marcus Aurelius Antoninus “Elagabalus” ( ˌɛləˈɡæbələs EL-ə-GAB-ə-ləs;[a] born Sextus Varius Avitus Bassianus, c. 204 – 11/12 March 222), was Roman emperor from 218 to 222, while he was still a teenager. His short reign was conspicuous for sex scandals and religious controversy.
https://en.wikipedia.org/wiki/Severus_Alexander
Marcus Aurelius Severus Alexander[1] (1 October 208 – 21/22 March 235) was a Roman emperor, who reigned from 222 until 235. He was the last emperor from the Severan dynasty. He succeeded his slain cousin Elagabalus in 222. Alexander himself was eventually assassinated, and his death marked the beginning of the events of the Third Century Crisis, which included nearly fifty years of civil war, foreign invasion, and the collapse of the monetary economy.
https://en.wikipedia.org/wiki/Maximinus_Thrax
Gaius Julius Verus Maximinus “Thrax” (“the Thracian”; c. 173 – 238) was Roman emperor from 235 to 238.
His father was an accountant in the governor’s office and sprang from ancestors who were Carpi (a Dacian tribe), a people whom Diocletian would eventually drive from their ancient abode (in Dacia) and transfer to Pannonia.[4] Maximinus was the commander of the Legio IV Italica when Severus Alexander was assassinated by his own troops in 235. The Pannonian army then elected Maximinus emperor.[5]
The geographical term Illyris (distinct from Illyria) was sometimes used to define approximately the area of northern and central Albania down to the Aoös valley (modern Vjosa), including in most periods much of the lakeland area.[6][7] In Roman times the terms Illyria / Illyris / Illyricum were extended from the territory that was roughly located in the area of the south-eastern Adriatic coast (modern Albania and Montenegro) and its hinterland, to a broader region stretching between the Adriatic Sea and the Danube, and from the upper reaches of the Adriatic down to the Ardiaei.[8][9][10]
From about mid-1st century BC the term Illyricum was used by the Romans for the province of the Empire that stretched along the eastern Adriatic coast north of the Drin river, south of which the Roman province of Macedonia began.[11]
https://en.wikipedia.org/wiki/Pescennius_Niger
Gaius Pescennius Niger (c. 135 – 194) was Roman Emperor from 193 to 194 during the Year of the Five Emperors. He claimed the imperial throne in response to the murder of Pertinax and the elevation of Didius Julianus, but was defeated by a rival claimant, Septimius Severus, and killed while attempting to flee from Antioch.
https://en.wikipedia.org/wiki/Clodius_Albinus
Decimus Clodius Albinus (c. 150 – 19 February 197) was a Roman imperial pretender between 193 and 197. He was proclaimed emperor by the legions in Britain and Hispania (the Iberian Peninsula, comprising modern Spain and Portugal) after the murder of Pertinax in 193 (known as the “Year of the Five Emperors”), and proclaimed himself emperor again in 196, before his final defeat and death the following year.[1]
Aquileia is a town on the northern coast of the Adriatic Sea. It was important. During its siege, Maximinus was killed in action.
I am not very sure, but seemingly, the Sultan of Egypt.
A consistent accession process occurred with every new Mamluk sultan.[151] It more or less involved the election of a sultan by a council of emirs and mamluks (who would give him an oath of loyalty), the sultan’s assumption of the monarchical title al-malik, a state-organized procession through Cairo at the head of which was the sultan, and the reading of the sultan’s name in the khutbah (Friday prayer sermon).[151] The process was not formalized and the electoral body was never defined, but typically consisted of the emirs and mamluks of whatever Mamluk faction held sway; usurpations of the throne by rival factions were relatively common.
https://en.wikipedia.org/wiki/Guelphs_and_Ghibellines#White_and_Black_Guelphs
Guelphs, broadly speaking, supported the Pope. Ghibellines, broadly speaking, supported the Holy Roman Emperor.
https://en.wikipedia.org/wiki/Battle_of_Agnadello
The Battle of Agnadello, also known as Vailà, was one of the most significant battles of the War of the League of Cambrai and one of the major battles of the Italian Wars.
Louis XII defeated the Republic of Venice.
https://en.wikipedia.org/wiki/Pandolfo_Petrucci
Pandolfo Petrucci (14 February 1452 – 21 May 1512) was a ruler of the Italian Republic of Siena during the Renaissance.
https://en.wikipedia.org/wiki/Niccol%C3%B2_Vitelli
Niccolò Vitelli (1414–1486) was an Italian condottiero of the Vitelli family from Città di Castello.
The son of Giovanni Vitelli and Maddalena dei Marchesi di Petriolo, he was orphaned and grew up under the tutelage of his uncle Vitellozzo who introduced him into the political life of the area. He was podestà in some of the major Italian cities, such as Florence, Siena, Genoa and Perugia.
https://en.wikipedia.org/wiki/Citt%C3%A0_di_Castello
Città di Castello (Italian pronunciation: [tʃitˈta ddi kasˈtɛllo];[2] “Castle Town”) is a city and comune in the province of Perugia, in the northern part of Umbria.[3] It is situated on a slope of the Apennines, on the flood plain along the upper part of the river Tiber. The city is 56 km (35 mi) north of Perugia and 104 km (65 mi) south of Cesena on the motorway SS 3 bis.
Under Pope Martin V in 1420 it was taken by the condottiero Braccio da Montone. Later Niccolò Vitelli, aided by Florence and Milan, became absolute ruler or tiranno. Antonio da Sangallo the Younger built an extensive palace for the Vitelli family.
In 1474 Sixtus IV sent his nephew Cardinal Giuliano della Rovere, later Pope Julius II, to rule the town. After fruitless negotiations he laid siege to the city, but Vitelli did not surrender until he knew that the command of the army had been given to Duke Federico III da Montefeltro. The following year Vitelli tried unsuccessfully to recapture the city. After the conspiracy at Senigallia, Cesare Borgia had Vitelli strangled on the evening of 31 December 1502, and Città di Castello was added to the Papal possessions.
https://en.wikipedia.org/wiki/Guidobaldo_da_Montefeltro
Guidobaldo (Guido Ubaldo) da Montefeltro (25 January 1472 – 10 April 1508), also known as Guidobaldo I, was an Italian condottiero and the Duke of Urbino from 1482 to 1508.
He fought as one of Pope Alexander VI’s captains alongside the French troops of King Charles VIII of France during the latter’s invasion of southern Italy; later, he was hired by the Republic of Venice against Charles. In 1496, while fighting for the pope near Bracciano, Guidobaldo was taken prisoner by the Orsini and the Vitelli, being freed the following year.
Guidobaldo was forced to flee Urbino in 1502 to escape the armies of Cesare Borgia, but returned after the death of Cesare Borgia’s father, Pope Alexander VI, in 1503.
https://en.wikipedia.org/wiki/Girolamo_Riario
Girolamo Riario (1443 – 14 April 1488) was Lord of Imola (from 1473) and Forlì (from 1480). He served as Captain General of the Church under his uncle Pope Sixtus IV. He took part in the 1478 Pazzi conspiracy against the Medici, and was assassinated 10 years later by members of the Forlivese Orsi family.
Haven’t found much about them.
https://en.wikipedia.org/wiki/Emirate_of_Granada
The Emirate of Granada (Arabic: إمارة غرﻧﺎﻃﺔ, romanized: Imārat Ġarnāṭah), also known as the Nasrid Kingdom of Granada (Spanish: Reino Nazarí de Granada), was an Islamic realm in southern Iberia during the Late Middle Ages. It was the last independent Muslim state in Western Europe.[2]
Haven’t actually found who he was.
https://en.wikipedia.org/wiki/Antonio_de_Venafro
Da Venafro was born in 1459 in Venafro, Molise. He moved to Siena and attended the university there graduating in jurisprudence. In 1488 Venafro was elected professor of law at the University of Siena. In November 1493 Antonio was elected Appellate Judge. As such he was arrested by the avant-garde of Charles VIII and forced to follow them in their march to Rome. He was freed a few days later only by a direct order of the King himself. A trusted adviser and private secretary of the Lord of Siena, Pandolfo Petrucci, he was named by the latter counselor and prime minister. In the month of October 1502 Venafro represented Pandolfo Petrucci at the Diet of La Magione; and later he went to Imola with Paolo Orsini, where a peace agreement was signed between Cesare Borgia and the conspirators of La Magione represented by Paolo Orsini.
Haven’t found who he was.
https://en.wikipedia.org/wiki/Philip_V_of_Macedon
Philip V (Greek: Φίλιππος Philippos; 238–179 BC) was king (Basileus) of Macedonia from 221 to 179 BC. Philip’s reign was principally marked by an unsuccessful struggle with the emerging power of the Roman Republic. He would lead Macedon against Rome in the First and Second Macedonian Wars, losing the latter but allying with Rome in the Roman-Seleucid War towards the end of his reign.
https://en.wikipedia.org/wiki/Titus_Quinctius_Flamininus
Titus Quinctius Flamininus (c. 228 – 174 BC) was a Roman politician and general instrumental in the Roman conquest of Greece.[1]
https://en.wikipedia.org/wiki/Giovanni_II_Bentivoglio
Giovanni II Bentivoglio (12 February 1443 – 15 February 1508) was an Italian nobleman who ruled as tyrant of Bologna from 1463 until 1506. He had no formal position, but held power as the city’s “first citizen.” The Bentivoglio family ruled over Bologna from 1443, and repeatedly attempted to consolidate their hold of the Signoria of the city.
https://en.wikipedia.org/wiki/Kingdom_of_Naples
The Kingdom of Naples (Latin: Regnum Neapolitanum; Italian: Regno di Napoli; Neapolitan: Regno ’e Napule), also known as the Kingdom of Sicily, was a state that ruled the part of the Italian Peninsula south of the Papal States between 1282 and 1816. It was established by the War of the Sicilian Vespers (1282–1302), when the island of Sicily revolted and was conquered by the Crown of Aragon, becoming a separate kingdom also called the Kingdom of Sicily.[3]
https://en.wikipedia.org/wiki/Battle_of_Ravenna_(1512)
The Battle of Ravenna, fought on 11 April 1512, was a major battle of the War of the League of Cambrai. It pitted forces of the Holy League against France and their Ferrarese allies. Although the French and Ferrarese eliminated the Papal-Spanish forces as a serious threat, their extraordinary triumph was overshadowed by the loss of their brilliant young general Gaston of Foix. The victory therefore did not help them secure northern Italy. The French withdrew entirely from Italy in the summer of 1512, as Swiss mercenaries hired by Pope Julius II and Imperial troops under Emperor Maximilian I arrived in Lombardy. The Sforza were restored to power in Milan.
New
+. Acquired
This is that “Traditional authority” by Max Weber: https://en.wikipedia.org/wiki/Traditional_authority
So, a state with turmoil is more likely to continue being in turmoil? Is this what Putin relied upon, thinking that the DPR and LPR would be willing to change allegiance? Assuming that the fall of the Soviet Union is considered a “change of masters”?
Like, limiting the use of the Russian language.
Can we infer that those who “helped to gain it” are the more enthusiastically Ukrainian Ukrainians? They would want total domination of Ukrainian as a language, and thus “cannot be rewarded”.
The Kievan government is already promising a lot of punishments for the “collaborators”.
Okay, first problem: “rooting out” the line of the “Ukrainian Government” has not happened.
To what extent exactly was it possible to keep the “old condition of things”? Seemingly, not entirely.
How much of their “independent spirit” is still there? I know that Brittany still speaks its own Celtic language, which is neither Romance nor Germanic. What about Burgundy, Gascony, and Normandy?
So, there are two problems here actually.
Note: Putin did not do that.
Hmm… Constantinople fell in 1453, and later Constantinople became Istanbul. 1532-1453=79. The people who remembered that are mostly dead, but the memory probably remains. 2022-79=1943.
Comments are superfluous.
Note that flats have been widely sold in Crimea. Did this tactic actually work? Anyway, it does not seem to have worked for the Donbass.
One of the things that are hard to imagine in the 21st century. “Give them land”, haha. Try to imagine giving out free land to your own citizens!
On the other hand, if you start massively building property on the territory of a new princedom, and give it out cheaply, it might work out. Cf. Israel building settlements on the West Bank, and the Turks massively selling flats and other property to everyone in Northern Cyprus.
Which is effectively happening with so many of the regions “pacified” in the recent years.
I think the “feebler neighbours” here would be the Crimean Tatars. Putin, instead of suppressing the Mejlis, should have given them all the power they wanted. Then he would have obtained a relatively weak, but extremely loyal and vocal supporting group, who would have been crucially interested in staying in Russia forever. “Divide and Conquer”.
I wonder, what is the original word for “physician”? What kind of medical services were even available in 1532?
Is Putin’s endeavour in Syria a similar example? Alienating friends? But who are those friends? Iran?
Hence, we see some not very bright perspective.
Interesting. So Machiavelli is making a claim that “dividing” is usually a bad idea. It is interesting to compare this with the modern world, where there have been many divided states.
In any case, dividing the Donbass between the NRs and the rest seems to be a bad idea.
Seems like the story of 2014 and 2022.
Seems like a very modern reasoning on Unitary States versus Federations.
Unitary states are hard to conquer, but once conquered, they do not rebel. Federations do the opposite. Splitting the opinion is easy, but completely subverting is hard, as there are many centres of legitimacy.
Which one did Putin choose? The third one.
A bitter truth.
Commentary superfluous.
We still have hope.
But this one is not very hopeful. Also, the important bit here is “taking arms”.
So, do not be afraid to follow others’ example, and to be unoriginal. If you have Virtu, you will make your own path anyway.
That “Alba Longa” is not
No millionaires from the slums.
This seems to be exactly about Putin. And still, he has managed to stay where he is for 22 years. On the other hand, he has never been confident in his power.
What does this mean? How could Cesare Borgia annul a King’s marriage? I remember, he was a Cardinal, but I thought that only the Pope could annul marriages.
In a few months??? Seriously?
A Prince is proclaimed in order to make peace between the Nobles and the People.
How has Putin gained his power? By the people or by the nobles? Can he actually govern “as he would”?
Indeed, a certain argument can be made about whether Boyars/Dvoryans are the real nobles, but still?
This means that no matter how much of a tyrant one is, when he loses popular support, he fails.
Putin does that a lot!
Was this the case with Putin? I mean, he was from the FSB, so presumably, he was expected to be cruel, and that is why during his first years he tried to placate the people.
This is what happened to Navalny. He was building his support on the people, but people did not save him from getting arrested by the “magistrates”.
Very true indeed.
Sure. Nobody wants to serve in the Army.
SOCIALISM! In the 1500s. They have “public works”, on which anyone who has lost his job may earn a little. Note that this is not a “benefit”.
So, for the less vocal opponents you release Arestovich, and the more vocal you strip of their Verkhovna Rada membership, and even strip of their citizenship.
Indeed, during the first months Putin’s troops were bombing the cities much more than in the later days.
So, to say it again: destroying civilian property is absolutely useless in a way. If anything, it helps your enemy more than you.
This is very important. Totally not obvious to newbies. People love you more when they help you. This works even better than when you help them. There are exceptions though.
I don’t think I have found many prominent thoughts in this chapter.
Commentary superfluous.
This will be a big problem with the “New Russia”. For many years, the logic has been to make good police (“magistrates” in Machiavelli’s terminology), not good arms.
Indeed, we can see that every well-run country nowadays has a good army: Israel, the USA, China, Singapore, Switzerland. Conversely, countries that are managed worse and worse each year have either small or corrupt armies: Germany (small), Russia (corrupt).
In our days, I would say, there are “foreign mercenaries” and “own mercenaries”.
Seemingly, true. But are the Wagnerites really that cowardly? According to the reports, they are more or less like the rest.
And still are!
That is partly what Wagnerites are doing.
This is, seemingly, what happened with Bashar al-Assad in Syria, bringing in the Russians as Auxiliaries.
Commentary superfluous.
This is what happened with the Afghan “democratic” army after the US troops left.
This is very important. In the 1500s he already understands that the only meaningful reason for a Government to exist is Defence.
Putin hasn’t had a lot of respect from the soldiers recently, I guess. And he never trusted the military either.
I should say “must do in two ways”.
Hard training - easy combat; easy training - hard combat. (Suvorov)
Can be seen as “Living with wolves, learn to hunt game.”, or “The Good must have greater fists than the Evil.”
It is essential for the Free People to learn how to act practically.
Qualities have praise or blame attached to them, but it is not necessary to follow only the ones which are praised.
Nothing is good or bad intrinsically, but everything according to the circumstances.
Liberality here means wastefulness and overspending.
So, wine and dine your friends. Doesn’t cost much, but makes you look good.
Have never seen that in reality. The British even have the word “austerity”, which has been said too many times.
Again, this maxim seems to have become obsolete. Roosevelt spent a giant amount of taxpayers’ money, and is still seen as a hero.
So the Ukrainian Government is trying to get as much money from the USA as possible. Makes sense.
Also not bad.
Again, having this image is good.
Hehe, but how do you decide? Putin seems to sincerely believe that “pointwise repressions” prevent “rapine and bloodshed”.
Well… apparently, feared is better.
Probably will happen with Putin too, eventually.
So, basically, treat other people the way they treat you. Reasonable.
Commentary superfluous.
This seems to be still observed today. Always say good things, that’s expected. But do as you must.
The Communists used “Communism” as the pledge to religion.
Sounds fresh.
Finis sanctificat media. (The end sanctifies the means.)
So, still, good government is the main way of escaping calamities.
Commentary superfluous.
Huh? Putin doesn’t seem to lose much from flying with the cranes.
I cannot repeat this as many times as needed. This should be the case with every civil country. Rules of succession should be complicated. This by itself brings balance and gives legitimacy.
It is the year 1532, and they are already talking about gun laws.
I remember that many castles in Scotland only have 3 walls.
I also remember the Maginot Line, which failed. But also the Mannerheim Line, which withstood.
Again, as long as you are not a tyrant, arming the people increases stability.
Which, again, proves that Russia is a colony.
Isn’t the KGB home-growing Nazis in the U.S.S.R. an example of this? Trying to use the “divide and conquer” strategy.
On the other hand, maybe he just argues about a multiparty system?
I think we have seen several confirmations of that.
If he is actually speaking about the multi-party system, then the “opposition bloc” in Ukraine may be an example of “siding with the enemy”.
A very long thought, but, seemingly, exactly what happened to Zelensky.
Cf. Putin’s oligarchs building huge isolated palaces, and those castles in Scotland.
Proofs! Collect your chips, Sir! Roleplaying awaits: patents, papers, whatever.
Interesting. Seems like that Army Principle: sweep your drilling square with a digging bar, not a broom.
Here he is speaking of actors of the same stature.
Hm. This stance seems to have become completely obsolete. Switzerland, Turkey, and China seem only to become more respected due to their ambiguous or neutral positions.
Here we can have a look at the “general population” of Russia, who are seemingly neutral and “not political”, and who think that this is the safest way. But they still bear the tax burden, and the sanctions.
Europe supporting Ukraine? Who is weaker than Russia, but with Europe combined may happen to be stronger?
In simple words:
Too true.
Also, true for almost all civil servants over there.
Important!
This is one of the most important thoughts of this book. Some controlled degree of flattery is unavoidable.
Heh. True. Never tell anyone your plans.
Ask people for advice! Really. They will reveal their thoughts.
True, although not always. True, because it allows filtering the noise. Not entirely, because still, getting other people’s thoughts is important.
I think this thought is still the biggest harbinger of hope for Russian Freedom. The ideology is transient, life is real. The Fridge will beat the TV eventually.
Times have changed indeed.
Young are less scrupulous than old? Does not seem to be so in our times.
Who are the “barbarians” he is writing of? The Arabs?
Our country, left almost without life, still waits to know who it is that is to heal her bruises, to put an end to the devastation and plunder of Primorie, to the exactions and imposts of Krasnoyarsk and Krasnodar, and to stanch those wounds of hers which long neglect has changed into running sores.
Indeed.
Indeed, the skillful in arms will not obey, and nobody has yet risen to be respected enough.
National troops
Emotional statement!
This file is about “how to design a good networked FS in 2022”.
At the moment we have three kinds of network synchronisation tools:
NFS is immune to conflicts, because it relies on not storing files on the target machine. It is also quite fast and does not occupy too much space on the clients.
However, it cannot pre-cache anything, and relies on having a fairly low ping, because it constantly performs synchronisation.
Google Drive allows selective sync, can work completely offline, but has difficulties resolving conflicts. It also requires the user to select cached files manually.
rsync makes synchronisation efficient, but is completely manual.
Fundamentally, the problems are:
For simplicity, let us assume we have a client-server infrastructure. This is not necessarily the case, but for simplicity let us start there.
As said above, the main problems are synchronisation and conflicts. When do they happen?
This article records several techniques that I use while reading. It is neither exhaustive nor complete, and these methods may not be the most efficient. I hope people will discuss them and suggest improvements. I intend to write down here some methods that I consider useful, and the reasons for using them.
If computers can help us read better, we should adopt such methods. I usually try the following:
Unless a book has illustrations, black-and-white printing is sufficient.
I print the book at a print shop, make a paper cover, and then hand-write the title on it with a marker.
About 90% of my books are PDFs. I open PDFs in Evince or Emacs-pdftools. (You can use your favourite software.) At the very least this simplifies searching. Most of the remaining 10% are HTML or EPUB, so I convert them to PDF, or open them directly in Evince.
I have found that I generally use Google to assist my reading, so to guarantee quick access, I open Google in the browser in advance.
Most of the time I use Google Translate; it covers 90% of my needs. For the remaining 10% I use Wordreference, BRKS, or MDBG.
I use a mechanical pencil (I do not like sharpening pencils), 0.9 mm, HB. I also have an eraser, a relatively hard one. I do not use coloured pencils for notes.
To concentrate better on the text I am reading, I use a ruler to cover the text below. Sometimes I also use it to underline important parts of the text.
I use it when reading Chinese materials. If you need to look up an unknown character in a dictionary, handwriting input is simpler than using radicals. The Cangjie input method might be simpler still, but I have not learned it yet.
I use the time-tracking feature of Emacs org-mode to measure the time needed to read each book. This feature also lets me concentrate better on the reading process, because I need to record my reading time precisely.
I strive to accompany every book I read with a notes file. Emacs and other specialised software have annotation features, but I cannot yet use them efficiently. The methods above only apply to literary, entertainment, or fiction books. When reading scientific books I use a different approach. The notes file mentioned above is an org-mode file. It consists of two parts: a dictionary and annotations.
Some people recommend covering part of the text with a ruler as a way to read with more focus and speed. However, I use the ruler to cover the text I have not yet read, to keep my attention from wandering. I place the ruler on the line I am reading; having finished a line, I move the ruler to the next one.
As mentioned above, I do not use coloured pencils for annotations, because I do not know how to use them effectively. (Your feedback can help me improve.) Apart from colour, the remaining annotation tools are:
I usually underline key phrases. If, after repeated reading, they still interest me, I copy them into the annotations section. I circle words I do not know, and then copy them into the dictionary node. Sometimes, if I feel that a longer passage does not carry much meaning, I box that part and draw a cross over it. Sometimes I add notes in the margin. Sometimes I add notes between two lines. Neither of the latter two methods is very effective.
When I come across a phrase I do not know, I search for it online and then put it into the vocabulary list. I also search for concepts I do not know, but do not necessarily write them down. (Should I add another node to the notes file?)
There is no need to write as if you were completing a university thesis; that would waste time. But do try to record good words, good sentences, and unusual thoughts.
If you find anything useful in this blog or its other pages, please subscribe and donate. Please repost, share, and discuss; your feedback can help me improve.
http://lockywolf.files.wordpress.com/2021/11/001_book-cover.png
I have finished a book by Craig Scott called “Professional CMake”. This article is my review of that book. The review will indeed be a little short, because I do not feel the book expanded my horizons. I have decided to write a review of every book I read, so this book is no exception.
Because my working language needed to change to C++, I needed to learn CMake. At my new company the project build system was far from perfect, so I set out to update and refactor (rebuild) it, converting it to modern CMake. Naturally, I searched for CMake-related reference materials. Although the official CMake documentation is rich, I still hoped to find something more accessible. (When studying a new field, I usually look for books that narrate how a certain tool is used.) Craig Scott’s book is relatively well known and easy to find, and has many good reviews, so I tried reading it.
The book can roughly be divided into three parts. The first part gives the reader a general overview of the tool, describes the several most common usage patterns, and briefly introduces its history. The second part describes CMake’s main application area – building software projects, that is, assembling a pile of C++ (mainly C++, although other C-related languages are also supported) into a working system. The third part, building on the second, continues describing CMake’s functionality, emphasising its less common features, in particular the built-in testing tool, CTest, and a closely related test-results dashboard, CDash.
On the surface, the book seems to merely restate the official materials. Unfortunately, it really is like that. Obviously, when writing a book about technology (or software) it is hard to completely avoid referring the reader to the official materials, but I feel this book overuses the device. If you compare “Autotools: a Practitioner’s Guide” with “Professional CMake”, the amount of excerpting from the official documentation is much larger in the latter. Yet that is not why we usually read technical books. Rather, we read technical books to gain a working intuition: “use what you know, find out what you do not”. On the other hand, understanding this need, “Professional CMake” ends every chapter with a “recommended practices” subsection. These are written from the author’s own point of view, and are probably aimed at improving one’s intuition for CMake.
The author clearly likes CMake very much. He praises its “policy” system, which lets users adjust CMake’s behaviour to match older versions, thereby allowing compatibility to be tuned, and he praises the ease of adding features to the code base, partly owing to its good readability. However, I am more sympathetic to the view that a good programming language does not need ever more features piled on. Of course, a build system will inevitably need certain peculiar features, both catering to specific programming languages and operating systems and resorting to ugly hacks. In my view, CMake has gone too far. Perhaps not as far as GNU Automake, but the effort spent writing those algorithms could have been spent improving the CMake language itself. Yes, you did not misread: CMake is first of all a programming language – yet another language introduced into the C/C++ ecosystem. Why does our toolbox need an additional language? Fine, one can argue that POSIX Shell, especially as used in Autotools, is a terrible language. Even using semicolons to separate lists is a strange idea. But more importantly – and this is perhaps the book’s biggest shortcoming – the author seems not to realise these flaws, and avoids mentioning them. For someone attempting to write a professional book, this is a major failure! For me personally, the most disappointing thing is that the author appears not only to misunderstand the difference between a macro and a function, but also to misunderstand the purpose of their existence.
“Professional CMake” gets the job done. If you need a not-so-hasty introduction to CMake, something deeper than the introductory blog posts you can find with a casual search, yet not as heavyweight as the official documentation, you may choose to read it. It is not a bad choice. If you follow the book’s guidance, you are also unlikely to write old-style, hard-to-maintain CMake code. But do not expect this book to make you a CMake expert, or to warn you about CMake’s harder-to-understand tricks. So keep your expectations of this book modest.
If you find anything useful in this blog or its other pages, please subscribe and donate. Please repost, share, and discuss; your feedback can help me improve.
I wanted to turn this file into an essay on how to make a decent file system tree. I failed, as this task turned out to be unimaginably harder than I had expected. I, thus, promote this file to the status of a "living document", where I may be adding features as I happen to find them convenient.
Computers are unreliable, misleading, and oftentimes outright lying. It is possible to make computers efficiently assist you in everything you do. However, learning to do so requires managing a large amount of material and takes a lot of time. This file proposes a few guidelines that the author found helpful in managing the computer’s file system structure. Even though there are many services that provide “outsourced” management of certain kinds of computer resources (such as Google Photos, Gmail, WordPress), and they may be used when appropriate, it is still necessary to understand the underlying principles of data management. Ignoring them leads to a chaos that is hard to navigate; in the case of online services the chaos is merely offloaded into the public computing system.
Generally speaking, this document consists of three intertwined topics: brain modelling with a file system graph, making backups, and managing tasks. No doubt, it leans much more toward the author’s own style of managing personal data, but hopefully there may be a way to reuse the ideas for the reader’s benefit.
Sometimes you see certain numbers advertised by equipment manufacturers, shops, service providers, et cetera. Do not believe them; test everything yourself. Your HDDs will be 1.8 TB instead of 2 TB. Not a huge lie, is it? No more than 10%. But if you have your system planned out to the byte, it is going to be a huge waste of money and time to buy a disk that doesn’t fit your requirements.
Somebody promises you 150 Mb/s speed on a wired channel? You’re probably already aware of the fact that official numbers are exaggerated, right? So you reasonably make a discount of, how much? Like, 10%, right? 130 Mb/s? You are wrong. In an adversarial case, that is, a real case created by the various interacting components of your system, you are going to attain 1.5 Mb/s at best. Divide every marketing promise by 100; that is going to lead to less disappointment.
The only reliable medium nowadays is paper. Yeah, if you’re Byron or Mo Yan, your oeuvres may happen to be mirrored by human memory, but I wouldn’t rely on that.
HDDs, SSDs, everything fails. Moreover, everything fails often.
Don’t get me wrong, having someone take care of your data is a great stress-relief. Just do not overestimate the reliability of those services. At some point you will buy a holiday trip to a country where your cloud provider is, not even blocked, just choked by the low quality of the network.
Everything needs to be copied five times:
The previous section, Reservation, may seem a bit radical to many people, but it is justified by convenience more than by paranoia. When you need to access your data, you frequently need the easiest, most convenient way of reaching it. Redundancy here is a way to achieve ease.
Naturally, redundancy is the opposite of deniability. The more copies of data you have, the harder it is to clear up the evidence.
Therefore, the two biggest threats that this article is considering are:
Unauthorised access is considered a problem, but of less importance than losing data.
In particular, this manual suggests daily backups of the whole disk of your computer. The easiest way to achieve this is to have a spare backup SSD in your backpack, and do a disk synchronisation every morning at the beginning of the day. With rsync and USB3 this should be fast enough.
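A minimal sketch of such a morning sync, assuming the spare SSD is mounted at /mnt/backup-ssd (the mount point is invented; the flags mirror the backup command given later in this document):
time rsync --archive --hard-links --acls --xattrs --one-file-system --delete-before --human-readable --info=progress2 --exclude='/tmp/*' / /mnt/backup-ssd/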
This is not automatic, but doing things manually is less likely to fail silently. However, this also means that everyone who has access to your backpack can steal all of your important data by stealing the backup SSD, which is way more portable than your laptop.
In addition, although SSDs are fast, they also die quickly. Therefore, a second backup to a spinning magnetic medium is recommended too.
But magnetic media are slow, therefore I would recommend doing magnetic backups overnight. Although a lot of technical data on the laptop disk is changed at night, when indexers and disk upkeep utilities are doing their job, the important data would still be saved.
The magnetic disk is better left at home. This means that, again, a robber may get access to it, but you are at least partly insured against losing all your data together with the laptop, which you may just forget somewhere.
The best backup is the one that is easy to restore.
Restoring a laptop backup, ideally, should involve only replacing the internal SSD with a backup copy.
Restoring a dead NAS should, ideally, only involve replacing a dead root SSD/SD card with a nightly backup.
On servers, perhaps, a RAID-1 (mirror) is good enough, if you have a rebuild command written in some very accessible place.
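For example, a hedged mdadm sketch, assuming the mirror is /dev/md0 and the replacement disk is /dev/sdb1 (both device names are illustrative):
mdadm --manage /dev/md0 --add /dev/sdb1   # add the fresh member; the mirror resyncs itself
cat /proc/mdstat                          # watch the rebuild progress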
Unfortunately, doing a backup on a smartphone is much harder. Although it is possible to make disk images with netcat and dd (google it), restoring those images may prove to be infeasible due to encryption and other digital signatures mindfuck. The answer would be to never keep anything on your smartphone that is not integrated into your main brain model. (Be it a laptop HDD or a cloud drive.)
So, when you lose/break your phone, you still have to reinstall the apps you had, but that is not that much of a problem, since most of that data is in the cloud anyway. The rest can be synced back with Syncthing.
An Oligarch’s cloud is likely to be good enough to keep your phone data (Google, Huawei, Samsung), but it is prone to banning. Therefore, having a personal cloud (even be it way less performant), allows you to quickly switch to an alternative storage, when you are banned.
The parts of your brain that are not at an Oligarch’s cloud, can be fetched from the phone over Wifi with Syncthing, as long as it gets connected to the mother-ship at least once in a while.
In this world, there is a digital model of you. It is not a single model, and it is mostly not entirely an electronic model, it may very well be on paper, or rather, papers, held by various institutions that you had happened to give your data to.
Your schools probably hold a few files on you, as do your work units and your military service. The police, even if you haven’t done anything wrong and just drive a car, have a profile on you. Of course, Google and Alibaba have something. Your boyfriends and girlfriends, relatives and pals too.
Of course, the person who holds the most on you is probably your enemy, if you are honoured to have one. Oh, how exciting it would be to find out what your enemy has on you. But he’s not you: while for most people the choice between sorting out a personal archive and having a game of Dota is not at all obvious (even for the most conscientious of us), for our enemy it is crystal-clear; still, nobody would be mining data on you with the same diligence.
We do not usually think about this data in terms of a file system. However, from a data engineer perspective, it is a distributed virtual system of data blocks. Not every data block is on an actual disk, but each one has some kind of an “address” via which it can be reached.
These addresses are usually incompatible, but why can’t they be made compatible? At least some of them can.
Now I have to start speaking with a bit of technicality. For example, the subsystem called “fuse” allows a programmer, with a bit of work, to make a huge variety of addresses compatible with the addresses that your files have on your computers.
Is the file system’s system of addresses exhaustive, or the best possible? Likely not. But, surprisingly, the set of primitives that we have developed for working with files, while staying minimal, is still tremendously powerful. You can exceed this power, indeed, but this requires a giant increase in complexity.
“Mind mapping” as a name was invented by Tony Buzan to describe his own paper-based protocol for recording data.
It is often known under the name of “concept mapping”, and is frequently praised as a “totally different way of thinking”, but really is just the popular explanation of graph theory.
Which does not make it worse, of course, and in fact, noticing this immediately made me think “hang on, but my file system is also a graph”.
Indeed, under relatively mild restrictions (mind map conditions), a lot of things can be represented as graphs, and graphs easily map onto a file system structure. Conversely, this immediately leads us to the conclusion that our data can be visualised as a graph, and this can give us useful insights.
With this thought in mind, I tried drawing “my life” as a mind-map-like graph, placing various aspects of my life on that sketch.
Many things went there: studying, job, friends, relatives, hobbies, various government and society-related things.
And immediately it became apparent to me, that:
Okay, so the two most popular approaches are not working and will never work.
What is left to do?
Well, that is why this document is called a “living document”. I haven’t found the answer. However, I found a few tricks that have made my life easier.
There are a few tricks that are worth considering when designing a human-interacting system.
There is a noticeable disparity between the places we expect to see things, and where we really see them.
You may have an excellent task manager, but it will not be of use if you do not open it. And conversely, if you see unexpected things where they should not be, you are more likely to react upon them.
Imagine your wife leaving you a message inside the code file or the document you are currently working on. You are much more likely to do something!
Actionable insight: I am configuring my system to put reminders and notifications right in the home directory.
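A minimal sketch of the idea (the file name is invented for illustration):
echo 'renew the passport before Friday' > ~/REMINDER--renew-passport.txt   # lands where the eye lands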
The title is a little bit clickbait-ish; in fact, human attention volume can be larger, say, up to 14 items, but the scale is about that large. If you are having more items in a directory, your brain will select its own native number of items (7-14), and will ignore the rest.
In particular, it means that each directory you make should, in general, have no more than “your natural” number of items. Self-check: my home directory (“~/”) has 28 items, and I ignore most of them, except about 7. However, I do notice all unexpected files in a directory quite quickly.
This number is trainable, as most human skills are, but not extensively. You, perhaps, can raise it from 7 to 14, but not to 50.
However, I know two ways of “tricking” this number.
The first way is to give shorter names, or even hide the directories that “you know are there”. Since you remember that they are there, your brain ignores them when it sees them, but you can still see them at the visualisation interface (make sure you have one).
The second exception to this rule is the case when items are somehow dependent on each other. If the items have some natural ordering (perhaps according to some date, or a human name), you can have more than your “fixed number”.
Why am I so keen on increasing this number? Can’t we just make groups and subgroups? The answer is “not really”.
Each time you go inside a directory, you are having a context switch, which means you are losing a bit of context.
In other words, the depth of your file system tree also matters.
It matters less than breadth, but still.
Keep your brain data structures tight.
A dashboard is a misleading thing. Remember the trick from the previous chapter that can be used to increase the number of items in a directory? (Adding “implicit” items that your brain ignores.)
Here we see the same effect, but in the opposite direction. If you have a dashboard, you are getting “a feeling” that you are up to date with the information, but in reality, your brain starts to ignore things it is getting used to. At least give your dashboard more contrast.
I still have one, and I do have a habit of checking upon it, but it is less useful than I hoped.
Notifications are vital, which also means that they are extremely expensive. Notifications can save you a lot of grey hair if they arrive timely and warn you about something important, but too many notifications will blow your mind; they are very expensive to process.
Opinion point: this is why “free”, commercials-funded services are in reality much more expensive than those you pay for. Paid services just eat your money; you can make more. “Costless” ones are eating your life, and you are not getting a new one.
Heuristic: if you cannot keep your notifications in their place, the (bad) trick is to subscribe to too many. Yes, you are losing important ones, and losing the ability to get early notices, but this is still better than having your life eaten by ads.
Another important point is to get notifications “when and where” you need them. It does not help much to get an important notification from your server while you are driving your car. You cannot react to it, and thus you are: (1) losing energy on processing this notification, (2) losing energy on rescheduling it, (3) maybe wasting time on mitigating it.
Is that obvious?
Essentially, there are two ways of getting new “TODO” items into your list:
TODO items are what the skeleton of your life consists of. It is important to notice that an organism does not consist of only the skeleton. The “taste” of life, the “moments of happiness”, are impossible to plan, but if you do not have a solid skeleton, those “happy moments” have nothing to get entangled in and hooked upon.
This is not obvious! Why aren’t “files” those items? Informally, because files can be seen as different “faces”, “views” of the same “thing”.
In fact, you never know when things that you are experiencing in your life are going to grow in abstraction, and turn from a file into a directory. It is better to always start from directories. (WWW Consortium agrees with me: https://www.w3.org/Addressing/)
But the point is – you never know where they will go. If you are going to a dancing party and making a directory for a ticket purchase, it may later turn into a directory for dancing textbooks and videos, or maybe into a directory of cocktail recipes, or a directory of cool dancing places.
But you would still want to also keep this directory in the “tickets” catalogue.
(This is what you need symbolic links for.)
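A sketch with invented directory names: the party keeps its one real directory under an events tree, and a symlink files it under tickets as well:
mkdir -p ~/Events/2014-dancing-party ~/Tickets
ln -s ~/Events/2014-dancing-party ~/Tickets/2014-dancing-party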
Yes! And this is a problem!
I am trying to use both symbolic links and hardlinks in order to make the system VFS (virtual file system) match my brain, not the distribution of data across hard drives and clouds.
It does not work very well! Suggestions welcome! But so far I have created a fairly reasonable structure from symlinks, bind mounts, and regular copies of file trees made with rsync.
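For illustration, the bind-mount flavour of the same trick, with invented paths; unlike a symlink, a bind mount looks like a real directory to every program:
mount --bind /home/user/Data/Photos /home/user/People/Family/Photos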
Even if your file system structure is decent, you will forget where you put stuff, and you will find yourself exploring your mind map as if it is alien to you. (Sometimes this is also exciting.)
Thus… help your “future self”. Annotate everything that can be annotated, you will thank yourself a million times later.
Context will also help your automatic tools be more productive. I will say a bit more of that later.
The most obvious place to add context to your files is their name. Yes, it is not very flexible, and frankly quite bodgy, but it is the only place that is even remotely reliable in computing.
There are other places, but they are more specialised.
One more place that is worth considering is your file headers. You can often put the vital context information there.
Context includes:
How does a human’s brain work?
We have “Projects”, “Events”, and “Categories”. Projects are limited in time and scope. Events are limited in time only. Categories are limited in scope only.
There are also “tags”.
Suppose you are studying Chinese. This gives you a category “Chinese”, under which you would be creating your stuff.
Suppose you are joining the University of Edinburgh. This would give you a category “Uni”.
In year 2014, autumn semester, you are joining an introductory course in Chinese, in Edinburgh.
This course is definitely a project.
You’re studying badly, pass your exam so-so, and get the artefact, the diploma.
You leave Edinburgh, but still keep studying Chinese. In your spare time you are working on the exercises from the same textbook.
Can you write your solutions into the same project? Apparently, no, as the project is already closed.
A file system is a tree. A git repo is a DAG, Directed Acyclic Graph.
You can traverse a tree naively. You can traverse a DAG in a smart way.
However, the human brain is not a tree, and not even a DAG. It’s a general-purpose DG (directed graph).
You can traverse a DG too, but you need to be much smarter than usual.
How would you like to organise your brain? Keep in mind that there should be some data structures available for shared usage with other people and robots.
Let’s take a simple example.
You have a directory called ~/Music , where you put your music. You have a directory called ~/People/Mom , where you put things that are related to your mom.
For example, your Mom likes a band called Gogol Bordello, and you also like the same band. Would you put it into ~/Music/Gogol-Bordello, or into ~/People/Mom/Gogol-Bordello? The problem is exacerbated by the fact that you may need to update the names of directories.
In the file system visualiser, I am using soft links to associate directories. But in general there seems to be no good solution yet.
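The soft-link association from this example would look something like the following (direction chosen arbitrarily; the target remains the single real copy):
ln -s ~/Music/Gogol-Bordello ~/People/Mom/Gogol-Bordello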
The heuristic here is: first build your hardware/software synchronisation, later build the semantic harness.
Things to consider:
- cp: it will screw up your dates and perms.
- swap kills SSDs, networks are slow.
- rsync root and home to a backup magnetic spinner.
The main difference between 3.2.2 and backups is that backups are restorable objects.
Data Dumps are file system subtrees or, sometimes, archives, that usually appear as a result of using a non-specialised tool for “saving” some data in a dangerous situation, instead of using a special-purpose backup tool.
They are usually non-restorable.
Git and friends. Try to store all your text files in a VCS; it pays off.
Reconciling the differences between two copies. Often used as opposed to merging (two conflicting copies).
Taking two versions of the same file, developed separately, and combining them to create a single one.
Storage that is frequently emptied. For example, a tmpfs.
For a personal laptop, every non-resumable operation is limited by an 8-hour time window, because realistically every operation should be done at most overnight. Every resumable operation is limited by 7*8=56 hours, as that is the amount of time available during the week. Practically, a backup that is more than a week old is useless.
When you have two “more or less similar” copies of a single directory tree, you are in big trouble. Now you have to combine them somehow and get a “master copy”. Not easy.
A well-tuned computer needs to run tasks for self-maintenance. On Windows, many people were used to defragmentation and disk checking. On Linux we still need disk checking, file system checking, and several other upkeep operations.
Some things cannot be done by a machine. For example, when you need to connect a backup HDD. Those tasks you need to plan in advance and enforce on yourself. This is hard, but worth learning how to do.
A not so bad tool to find which of your directories take up too much space.
Do not use fdupes.
Do not use rdfind.
Checks your file system for errors.
# Switches to fsck.ext4 mean the following:
#   -c     run_badblocks_ro
#   -c -c  run_badblocks_nondestr_rw
#   -C 0   show_progress
#   -f     force_check
#   -k     keep_old_badblocks_list
#   -y     auto_repair_yes
#   -t -t  print_time
#   -v     verbose
echo time fsck.ext4 -c -c -C 0 -f -k -y -t -t -v /dev/sdc1
I have been using rmlint recently. It is a bit weird, but at the end of the day it turned out to be more reliable and tunable.
It is an excellent tool to use for Subtree Merge. Highly recommend.
https://www.speedtest.net/apps/cli exists for ARM 64, and has a huge database of servers.
speedtest -s 26850 would do a test to some server in Wuxi, China
If you still do not use it – it is time to start. Learn it well, and it will help you a lot when you “kinda” know where your file should be.
The excellent “regular expression search tool”, used for content search within files whose location you only “kind of” know.
Learn it and start using it. It’s a great tool for super fast search of “stuff that was out there somewhere”.
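A minimal sketch of the three kinds of searches meant above (patterns and paths are invented):
find ~/Data -iname '*gogol*'          # when you "kinda" know where the file should be
grep -rn 'Gogol Bordello' ~/People/   # content search within files
locate -i gogol                       # fast database-backed search for "stuff out there somewhere"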
It’s an amazing, very fast and efficient desktop search tool. It takes time, maybe days, to index your drive, but contrary to Gnome’s Tracker and KDE’s Beagle, it actually works. The database is huge and you probably need an SSD for it.
I do not use it that much, because with a good FS structure you can be doing find/grep many times more often, and with good context you can just get by with “locate”. But in those cases when you “do not really remember”, recoll helps you “recoll”.
Rsync is an extremely versatile tool with an extremely fragile syntax.
The following will copy everything from root to the backup root.
Combined with rmlint, it can be used as a 3.2.8 tool.
In general, it is hard to use, but much, much better than just cp or scp.
Lets you resume your transfers, do incremental backups, fetch backups from remote machines, and a lot of similar things.
echo time rsync --links --partial --fuzzy -arHAXyh --info=progress2 /
echo time rsync -v --archive --hard-links --acls --xattrs --inplace --one-file-system --del --fuzzy --human-readable --info=progress2 --partial --dry-run from/ to
Indeed, see bug https://github.com/WayneD/rsync/issues/131
This is very important for modern restrictive ISPs.
I fake it with the following code:
time while ! rsync <...> ; do sleep 30 ; done
A very versatile tool for downloading all kinds of stuff. I recommend it. It can download through ssh too! (sftp is actually ssh.)
time a2 --max-tries=0 --ftp-user=username --ftp-passwd=
Where a2 is an alias:
alias a2='aria2c -l /tmp/RAMFS/2021-01-06T13:37:11+08:00-aria2-download.log -x120 --min-split-size=148576 --split=120 --auto-file-renaming=false'
You need an aria2-nitro patch to allow 120 connections with 100k splittings.
Important! By default, sshd has a built-in DDoS protection setting: MaxStartups 10:30:100.
You want to set it to 120:30:220 or something. But be wary of a real DDoS.
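That is, a sketch of the change, using the numbers suggested above:
# in /etc/ssh/sshd_config
MaxStartups 120:30:220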
Keep all your literary work in git. I use magit in Emacs, console git in the console, and mgit on Android for my diary synchronisation.
A kinda fragile, but still extremely useful tool for synchronisation of machines; it can also automatically protect you from a bit of regret after deleting things. Put into Syncthing the things that you cannot put in git.
It is that tool that lets you extract all the value from your photographs.
This is not really about organising files, but rather about creating them; still, I cannot avoid mentioning it here, because org-mode is a very versatile tool, and you can build a lot of your personal information management system on top of it. You can add to your files with ease. You can also add cross-references without much difficulty.
I use it to plan my tasks on the desktop, and to write documents and articles.
I use it on Android to edit org-mode files. They are actually Markdown files, but for my purposes it doesn’t matter. Markor is really worth having a look at, because it lets you take photos and other notes with . Meanwhile, tags are created as well.
Lets you run tasks periodically. Worth learning.
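A minimal crontab sketch (the script path and schedule are assumptions; edit with crontab -e):
30 3 * * * /usr/local/bin/nightly-backup.sh >> /var/log/nightly-backup.log 2>&1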
Those relatively nice tools that let you donate all your data to an oligarch in the name of his business interests. They are also quite convenient and work as an additional backup for your files. The killer feature is making the files available on your phone without synchronisation.
If you can, avoid it with the help of NextCloud. But probably you won’t be able to.
A replacement for Dropbox.
A way to keep your email locally and read it without internet. Is there a way to use it as a file system? Or visualise?
A way to keep your contacts on the local disk just the same as they are on your phone. Is there a way to use it as a file system?
A way to keep your diary records on your local disk for indexing and search. Is there a way to use it as a file system?
Lets you backup Skype.
A relatively unified format for keeping archives of things that have a natural order in time. Consider exploring RSSBridge or other anything-to-rss portals to avoid Internet Giants’ trickery.
A tool to create a 2.3.3 on your local computer. Useful for collecting the output of the scripts that check that all of your synchronisation machinery works.
A way to add valuable context to your files.
The tools are called setfattr and getfattr.
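A hedged example with a made-up attribute name, tagging a file and reading the tag back (requires a file system with xattr support):
setfattr --name=user.context --value='dancing-party ticket' ticket.pdf
getfattr --name=user.context ticket.pdf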
You must have them configured. They are the only thing that can give you at least something of a warning before your HDD dies.
are actually tar files, and can be unpacked as tar xvf
https://stackoverflow.com/questions/36819474/how-can-i-attach-a-vhdx-or-vhd-file-in-linux https://download.libguestfs.org/1.43-development/libguestfs-1.43.3.tar.gz
It needs hivex and supermin. Damn, a lot of work. I build hivex from GitHub tarballs, but Red Hat disapproves of this.
guestmount --add yourVirtualDisk.vhdx --inspector --ro /mnt/anydirectory
I didn’t manage to mount this with qemu 5.0. Tried to build 5.4.0rc4; also failed. And Windows 10 in VBox failed. Perhaps the file is actually broken.
Living without a VPS is hardly possible nowadays. You need it for every task which needs a public address.
You may have to keep some data there, but it is usually expensive, and it is “another person’s machine”. So ideally, the file system should be encrypted (from the hosting company’s technicians).
You need it, because you cannot keep all your data with you all the time. This is where you will keep most of your data, as well as backups.
You have to be prepared that you may drop it and it will die. Or your root SSD will die. Or your home SSD will die. But you will still keep your useful data there.
It is worth designing your system in such a way that losing your phone is sad, but not too much. You need to backup or synchronise at least:
Encrypting your phone will likely make it a huge pain to extract data from it if the screen is broken. I used to believe that having a regular backup (a nandroid or a dd image) is a good idea, but not any more. Keep your stuff on your laptop or NAS, not your phone.
If it is locked forever – just break it with a hammer and destroy the memory.
Is usually integrated with a messaging/scheduling service, and is thus convenient. You will need it as an interface to your friends who are controlled by the machine anyway.
For root drives, it is enough to keep a backup root drive in every machine and do an rsync backup every night. You can just plug the drive instead of the main one if the main one dies. For the laptop, you have to carry a drive with you, because laptops usually don’t have 4 drive bays.
You need at least two backup drives for your laptop, because SSDs tend to die “suddenly”. So you will have an SSD drive to back up your home partition quickly every day, and a more reliable HDD to back up the data overnight.
Those tend to die extremely quickly, so have a few in your pocket, ideally, identical. It’s worth having them linux-bootable, and have a few root directories:
In this section I have collected several tricks for helping me keep my file system tidy. Most of them can be classified into two groups:
Doesn’t work yet for “all apps”, only root ones
#!/system/bin/sh
while [ "$(getprop sys.boot_completed)" != 1 ]; do
    sleep 1
done
su -mm -c 'mount -v none -t tmpfs -o size=4g,nosuid,nodev,noexec,noatime,context=u:object_r:sdcardfs:s0,uid=0,gid=9997,mode=0777 /mnt/runtime/write/emulated/0/Download/tmpfs-cleared-on-reboot' > /sbin/.magisk/img/.core/mount.stdout.magisk.log 2> /sbin/.magisk/img/.core/mount.stderr.magisk.log
I have a separate “howto”, which is more of an example, on how to draw a data flow diagram for your cloud. Visit the article.
Unfortunately, such a file is hard (if not impossible) to generate automatically, and updating it is a pain.
But even so, getting a high-level overview of what your digital brain is like, is priceless.
I generate the file system map and print it on a giant three-by-one-metre poster on a wall.
tmpfs /tmp/RAMFS tmpfs nosuid,nodev,noexec,sync,dirsync,size=4G,mode=1777 0 0
Note: I mount /tmp as tmpfs, and /tmp/RAMFS as tmpfs-noexec. /tmp needs to be exec for building packages.
Why would you want that?
Because files that you download steal the space on your HDD, and, what is worse, they steal your attention when you are browsing your disk and/or searching in it.
You do not want to keep any useless files for longer than needed.
Transmission->Edit->Preferences->Downloading->Automatically add files from-> /tmp/RAMFS
Transmission keeps .torrent files in ~/.config/transmission/torrents
This is the same idea as in the previous paragraph.
Do not store anything worthless, and save time on navigating Transmission’s interface.
I didn’t find out how to do this; I basically mounted /tmp as ramfs. Firefox creates a directory called “${username}_firefox0” and downloads stuff there.
echo find . -iname 'desktop.ini' -delete
find . -type f -empty -delete
Worth doing after 3.5.9
find . -type d -empty -delete
echo rmlint -T "df" -c progressbar:fancy --progress --no-crossdev --match-basename --keep-all-tagged --hidden --must-match-tagged ~/Incoming/ // ~/good-dir
Run on about 400 GB.
lockywolf@delllaptop:~/Incoming$ rmlint -T "df" -c progressbar:fancy --progress --no-crossdev --match-basename --keep-all-tagged --hidden --must-match-tagged ~/Incoming/ ~/BACKUP/ // ~/books/ ~/Data/
⦃⌿⌿⌿⌿⌿⌿⌿⌿⌿⌿⌿⌿⌿⌿⦄ Traversing (585350 usable files / 0 + 0 ignored files / folders)
⦃⌿⌿⌿⌿⌿⌿⌿⌿⌿⌿⌿⌿⌿⌿⦄ Preprocessing (reduces files to 107623 / found 0 other lint)
⦃⌿⌿⌿⌿⌿⌿⌿⌿⌿⌿⌿⌿⌿⌿⦄ Matching (59120 dupes of 44798 originals; 0 B to scan in 0 files, ETA: 50s)
==> In total 585350 files, whereof 59120 are duplicates in 44798 groups.
==> This equals 81.36 GB of duplicates which could be removed.
==> Scanning took in total 1h 29m 10.406s.
Wrote a sh file to: /home/lockywolf/Incoming/rmlint.sh
Wrote a json file to: /home/lockywolf/Incoming/rmlint.json
You can use it to share files over your VPS right from your phone.
Rewrite it as a batch with xargs or something similar.
With pkgtools-15.0-noarch-41, installpkg can be modified to obtain the behaviour in question by adding
( # line 670
  cd $ROOT/
  grep -v '^install' "$TMP/$shortname" > "$TMP/$shortname"_noinst
  xargs --arg-file="$TMP/$shortname"_noinst --delim='\n' \
        setfattr --name=user.slackware_v1.installpkg.package_name --value="$shortname"
  rm -f "$TMP/$shortname"_noinst
)
to line 670 of /sbin/installpkg
and makepkg can be modified by adding
find ./ -type f -exec setfattr --name=trusted.slackware_v1.makepkg.package_name "--value=${TAR_NAME}" {} + # line 414
to line 414 of /sbin/makepkg
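Afterwards, the owning package of any installed file can be read back from its extended attributes. A hedged example (the path and the printed value below are made up for illustration):

getfattr --name=user.slackware_v1.installpkg.package_name /usr/bin/gv
# file: usr/bin/gv
user.slackware_v1.installpkg.package_name="gv-3.7.4-x86_64-5"

Note that the makepkg variant above writes to the trusted.* namespace, which only root can read.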
Oftentimes I have the following situation: I am working on something that can be published later. For example, I am reading a book and writing a review, which produces two files. These files I keep in a git repository for “book reading”.
There is also a repository for the blog posts. I do not want to synchronise the same file in two different repositories. I also do not want to check in pictures into a git repository, because git is not very efficient at working with binary data.
So I make a hard link of the review file. This way the git-versioned file gets all the changes instantly, even if the file is modified on a different machine. Moreover, if I update the file from a git perspective (adding something to the review), the changes automatically get into the Google Drive directory.
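A minimal sketch of the trick (the file names are hypothetical):

ln ~/book-reading/review.org ~/GoogleDrive/blog/review.org
# both names now point at the same inode, so editing either one updates both
stat --format='%i %h %n' ~/book-reading/review.org
# the middle field (the hard link count) should now be 2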
There is a huge caveat here:
Some programs, notably Emacs, by default make backup files by renaming.
That is, the file is renamed to filename.bak, and the renamed file keeps the hard link to the old “primary brother”, while the freshly saved file gets a brand-new inode, silently breaking the link.
In order for Emacs not to be bad here, you need:
(set-variable 'version-control nil) ; should be t, but breaks hardlinks
(set-variable 'backup-by-copying-when-linked t) ; same reason
Should be doable with a kernel module, or something.
Kprobes? uprobes? SystemTap?
Supposedly, tracker should be tagging stuff automatically. Didn’t try it.
declare DISK
DISK=sdc
cd /root
# -b block_size, -w destructive, -s show_percentage, -v verbose
# echo time badblocks -b 4096 -n -s -v -o /root/"$(date --iso)"-sdc.badblocks "/dev/sdc"
echo time badblocks -b 4096 -w -s -v -o /root/"$(date --iso)"-"$DISK".badblocks "/dev/$DISK"
With USB2.0 estimated time – ~75 hours? Hm…
Didn’t write down the USB3 speed.
Esata speed 0.1% – 1:50. Which makes it ~33 hours to just write one pattern?
Manual TWRP without “sdcard”, all partitions, via TWRP to an OTG disk takes:
3389 seconds, that is roughly ~1 hour, 41 GB in total.
Storage is not yet measured. Started – not finished: 4 hours is not enough, but the phone battery is dead.
echo dd if=/dev/zero of=/tmp/test1.img bs=1G count=1 oflag=dsync
echo dd if=/dev/zero of=/tmp/test2.img bs=512 count=1000 oflag=dsync
echo time rsync --archive --hard-links --acls --xattrs --inplace --one-file-system --delete-before --fuzzy --human-readable --info=progress2 --partial / /mnt/hd/ --exclude='/tmp/*'
This section tries to roughly outline in which manner the things above should be implemented. A good file system structure almost automatically implies a good backup system, because you have to fetch the “subtrees” from all the devices that you have, and/or export the data from “silos”. And if you have that tree-building system in place, you may just as well add a backup system too.
You will likely have to back up everything that is listed in the “software” above. When doing backups you have to consider the following:
I generally try to support each backup procedure with two auxiliary subprograms.
One runs together with the backup itself and notifies me if something goes wrong. The backup job itself does not notify me on every failure, because backups often fail. E.g., some backups I run every 5 minutes, but they succeed only once or twice a day, because only at that time is the required device nearby. But there should still be a service that sends you an email if backups have been failing for too long.
The second procedure updates my Dashboard and paints in bright red the things whose backups are obsolete.
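A minimal sketch of such a watchdog, assuming each successful backup run touches a per-target stamp file and a working mail(1) is available (both the stamp location and the threshold are my placeholders):

#!/bin/sh
# hypothetical: complain if the stamp has not been refreshed for 2 days
STAMP=/var/lib/backup-stamps/home-partition
MAX_AGE_MINUTES=2880
if [ -z "$(find "$STAMP" -mmin -"$MAX_AGE_MINUTES" 2>/dev/null)" ]; then
    echo "Backup stamp $STAMP is older than 2 days (or missing)." | mail -s 'Backup stale' root
fi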
I have a “BACKUP” directory on my laptop, which is roughly 2-level structured, as in: class/application, e.g. 01_Messengers/ICQ.
This directory is getting “scheduled backups” whenever it can, with regularity ranging from 5 minutes to 1 day.
My rough list:
Additional operations to do overnight:
locate
recoll
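Both amount to refreshing the file indexes; as a sketch, the overnight job could simply run the two standard indexers:

updatedb      # refresh the locate(1) database
recollindex   # refresh the recoll full-text index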
Not everything can be done automatically. The easiest example is the portable SSD that you back up your laptop to. You need to plug it in, as modern laptops just do not have a bay for a separate drive. This is getting even more important as new laptops have the NVMe storage soldered onto the motherboard, so if you break your laptop, you cannot even take the drive out.
This section has a rough outline of what is probably worth doing.
This file is to record certain random observations that I would like to return to later.
On
I cut my finger at home, in the kitchen, while cooking. Since I am a curious person, and since it is just interesting to study oneself, I decided to record how long it would take for the finger to heal. I immediately stuck a bandage on the cut, hoping to prevent the loss of "meat". I did not use any disinfectant, because the conditions were relatively clean, and I suspected that using a disinfectant on a wound would slow down the healing process. (I do not know whether this is correct, but I did not have any inflammation, so I suspect I was at least not completely wrong.)
It took roughly 11 days until the flap of "meat" grew back onto the rest of the finger. For all 11 days it did not really look like the flap was growing back into the main body of the finger. Then, suddenly, the skin on top of the flap hardened and fell away, and during just a single day the flap seemingly amalgamated completely with the rest of the flesh.
I will try not to forget to write down when exactly the area becomes completely as it was before, and all traces of the cut disappear.
The reddish spot is still visible, but not painful any more.
In 2020, in Shanghai, autumn really started to be felt in the air on the 7th of November. This was the first day when it felt truly necessary to wear a warm/winter coat.
The second time I had to wear a warm coat. A very warm autumn.
We are still wearing warm clothes. Shame I didn't write down the first time light clothing was enough. A couple of weeks ago.
I think the winter coat will not be needed for this year any more.
That makes it 4 months of cold weather.
I think cold weather came to stay. It is 7° now.
A nice black tea with an almost metallic taste that I really like.
I ordered it in a tea house near my home in Wuhan.
The first portion smells a bit like honey.
The rest are weaker, but very refreshing.
A really nice tea from Hainan.
I have read “Professional CMake” by Craig Scott. This is a short review. This time the review is, indeed, going to be short, because I cannot really say that the book was a paradigm-changing read. Nevertheless, I have decided to write reviews of most books I read, and this one is not an exception.
The necessity to learn cmake came to me when I changed my language of work from another popular engineering language to C++. At the new workplace, the project build system was not perfectly organized, and I volunteered to update it to modern cmake.
Naturally, I searched for some homework reading about cmake, and although official documentation existed, I still wanted something more narrative. (Indeed, this happened to be a fairly standard approach of mine to learning new areas.)
Craig Scott’s book is quite famous, easily google-able, and has positive reviews. Why not, I thought.
The book can be roughly partitioned into three parts.
The first part gives the reader a general overview of the tool, describes several of the most common usage patterns, and provides a short historical note.
The second part describes the main area of cmake’s expertise – building projects, that is, assembling a pile of C++ (mostly C++, although other C-related languages are supported) into a working system.
The third part, on top of the second one, provides an overview of things that cmake can do, but which are less known. For example, cmake has a built-in tool for testing, CTest, and a tightly related testing results dashboard, CDash.
At first glance it may seem that the book is merely restating facts from the official documentation. Unfortunately, that is true to a large extent.
Obviously, it is hardly possible to make a technical book about a certain artefact (a piece of software), entirely without referring to the official documentation, but I feel that “Professional CMake” is overdoing it.
Compared to, say, “Autotools: a Practitioner’s Guide”, the amount of excerpts from the official documentation is much larger.
However, this is not what we are usually reading technical books for. We are reading them to get a working intuition – “how to use what you know and find out what you do not know”.
On the other hand, obviously realising this necessity, Professional CMake is equipped with “Recommended Practices” sections at the end of each chapter. Those are purely opinionated and are, supposedly, aimed at developing that intuition.
The author obviously likes cmake. He praises its policy system, which allows compatibility feature tweaking, and attributes the ease of adding functions to the code base partly to its good readability.
I, however, am much more a supporter of the idea that a good language is not made by piling feature on top of feature. Obviously, no build system can do without certain idiosyncratic components catering to particular systems, and without dirty hacks. But in my opinion cmake overdoes it. Perhaps not as much as GNU Automake does, but the effort spent on writing heuristics could rather have been spent on improving the cmake language itself.
Yes, you haven’t misread it – cmake is, first and foremost, a programming language. Yet another language introduced into the C/C++ ecosystem.
Do we need another language in the toolbox? Okay, it may be said that POSIX Shell, and especially how it is used in Autotools is a horrible language. However, cmake’s own language is also far from being robust and semantically consistent. Even having lists delimited by semicolons is a queer idea.
But what is more – and what is, perhaps, the biggest drawback of the book – the author does not seem to realise those drawbacks himself, and speaks very little of them. A huge miss for someone trying to write a professional book!
To me personally, the most disappointing thing was that the author, seemingly, does not really understand not just how macros are different from functions, but what they even exist for.
“Professional CMake” does the job. If you need a not-so-quick introduction into cmake, which would be more in-depth than a randomly googled howto, but less heavyweight than crunching the original documentation, read it. It is not a bad choice. You are also less likely to write “old, hard to maintain”-style cmake code if you follow the book’s tutorials.
Just do not expect that this book will make you a CMake expert, or even warn you about the actually hard cmake trickery.
I have gone through the book in the title. I am not saying “have read”, because for quite some time already, since first grasping this trick while conquering Baudrillard’s “Simulacra”, I have been getting through a lot of material by listening to Text-to-Speech rather than by reading directly. It is very handy when there is “sunken” time during which it is not possible to execute tasks that require full concentration. This, however, comes at a price: certain details are perceived differently compared to reading in a traditional way.
Anyway, I have listened through the book, and it was an enlightening experience, which I would like to share with the world in this short review.
The book tells the story of an American couple with two children living in Shanghai for a few years, whose elder son had a chance to attend a Chinese kindergarten (as opposed to what typically happens to expat children in China – they attend foreign-style kindergartens, which are especially plentiful in Shanghai).
The book was especially interesting to me, as I am an original product of neither American nor Chinese culture, and so had a chance to see the story from a third-party perspective.
I have to say in advance that I read this book in Russian, rather than in the original English. I generally read in English quite a lot; however, I didn’t bother finding an English version, since I had already found a Russian one, I don’t remember where from. (It was on my reading list for quite a while.)
Mrs Chu herself is, or at least used to be (2017), the chief of the Shanghai office of an American non-profit organisation. She is a child of Chinese migrants to the United States, and hence had an opportunity to experience both American and Chinese influences in her upbringing.
She is also a member of the American council on relations with China: https://www.ncuscr.org/program/public-intellectuals-program/PIP-VI-fellows/lenora-chu
This immediately made me wonder where the Russian council on Russia-China relations is. Of course, there is a Russian Council on Foreign Affairs, but it is hardly a substitute.
The book starts with her choosing a kindergarten for her kid. She is deciding between a foreign-style kindergarten and an authentic Chinese one.
Obviously, curiosity wins, and she arranges for him to join “the most sophisticated Chinese kindergarten in Shanghai”, supposedly the one attended by the cream of the crop of the Shanghainese elite.
As grown-ups, we all know how important the environment in which we spend our time is. Even if we are not learning anything, or deliberately changing ourselves in some planned way, the surrounding conditions influence us. And even if your child is investing strenuous effort into not learning anything at all from the teachers he is made to obey (which is unpredictable, and beyond the parent’s influence), the environment is going to have its effect regardless. Thus choosing a school is, perhaps, the most powerful way to shape your kid’s future, second only to the effect of your own behaviour, which the child is bound to imbibe just by being your offspring.
It is often noticed (at least in Russia), that the most prestigious and competitive universities are, surprisingly, not the ones which have the best programmes or best teachers, but rather the ones which are chosen by the elites as schools for their children. Needless to say, the strongest weak connections, and those that are the most likely to help you as you progress into your life, are the ones created while attending a University.
It is questionable, however, whether this paradigm applies to kindergartens as well, rather than existing as a projection of an adult way of behaving onto little children. Personally, I only remember three people I have attended a kindergarten together with, and keep in touch with none of them.
However, this projection of adult fears onto the children’s lives goes through the whole book as a guiding light, and appears in so many places that it could, perhaps, even be promoted to a second title of the book.
The book roughly consists of two parts: the events in the life of Lenora’s child, and how these events are actually particular cases of general trends in Chinese society.
Competitiveness, poverty, fear of the future, rapid changes, foreign influence, corruption, migration – each of these things occupies a separate chapter in the book, and each of these chapters starts with a sketch of a situation happening to Lenora’s kid.
She has done a great deal of background research while writing the book – both studying the existing body of knowledge (books, newspapers) and doing fieldwork: interviewing people, studying real events, and engaging in social mechanisms (such as parents’ groups).
The language she is using is very vivid and clear. I could almost effortlessly imagine the scenes of events happening in the book. Granted, I have seen quite a lot of China, and especially Shanghai, so I probably can understand the situations better than an average foreigner reading about events so distant.
China is a very competitive society.
When I was a kid, it was widely believed by people around me that the USA is the best example of a competitive society, and that Americans do not have the concept of friendship at all. In general, there were a few rumors about American life, widely circulating in the society, that were deemed to illustrate the degree to which Americans are cold-blooded, self-centered, merciless people completely devoid of spirituality, that are going to trade you in for a tiny benefit for themselves.
Those rumors included:
Can you imagine my surprise when I experienced almost all of that in China after I had come here for work? (The largest item missing from this list is tips. Tips are not used in China, and thank God for that. Tips are a complete insanity.)
However, Mrs Chu, who can compare the competitiveness of the two societies first-hand, is surprised by the amount of competitiveness too.
She acknowledges that American society is fairly competitive, but the Chinese level is surprising even for her.
She even gives a plausible explanation – Chinese society has grown significantly and very quickly in the last generation’s lifetime. People also largely attribute this growth to the education system, and hence want to ensure a better future for their children, and keep pushing them to achieve.
The problem here is that, even though the country has grown, not all industries have grown at the same pace. Naturally, some have a physical limitation on them. Education cannot grow faster than retail shopping.
Perhaps, this is one of the reasons that this competition starts even at the kindergarten level. Every little bit helps.
She mentions that the Chinese government acknowledges this problem, and is trying to deal with it using a top-down approach, but so far all the attempts to curb the tsunami have largely been limited in success.
This unbearable competition imposes a colossal strain on human conscience, and, people being naturally imperfect, they start to cheat. In fact, they cheat on a massive scale.
She spends a great deal of effort describing the corruption permeating Chinese society.
Everyone pays for everything to everyone. She, first and foremost, sees corruption at school, but it is far from the only area of life soaked in corruption.
(Here I have to interrupt myself and refer to another book broadly attributed to the “Chinese domain” in my reading – the “China’s Gilded Age”, by Yuen Yuen Ang.)
What cannot be expressed in raw cash is expressed in services, access to people and gifts.
She spends a large amount of ink describing Chinese obsession with Western brands of clothing and accessories. I couldn’t prevent myself from seeing Western arrogance in her attitude.
For me it was not surprising at all to see people with less exposure to marketing and advertising be much more susceptible to the magic of brand names and the prestige of the price. “The Westerners” do it this way → “The Westerners” are successful → We need to do it this way, it is the right way.
It is, perhaps, ironic that people who are so skillful at cheating are, at the same time, so vulnerable to being cheated themselves. And it is also ironic that Chu does not see the whole concept of branding as cheating, although it clearly is. “Apple” products are not called “Foxconn”, although that would have been more earnest.
The book does not bear the word “Soldiers” in its title for nothing.
She uses “soldiers” as a metaphor to describe the atmosphere in the kindergarten her son was attending, and, more broadly, the atmosphere in educational institutions in general.
This was, perhaps, the thing in the book I could relate to the most.
Russian school culture is also borrowing a lot from the army culture. Maybe this is a thing that the Chinese have borrowed from the USSR, and considered useful. Or, maybe, it penetrated Russian and Chinese cultures independently, coming from Germany (Prussia? Bismarck?). Is there even a realistic way to build a nation without resorting to some sort of military surrogates? (Even democracies have Scouts and invest a lot in sport.)
At Alexander Auzan’s lectures on institutional economics, I heard “School, Prison, and Army build nations”. I kept recalling this phrase while reading the book.
I am entirely ignorant about Chinese prisons (although, for the interested, there is a Magazeta podcast about that), but the schools and the army certainly do their share in what modern China is.
Mrs Chu believes that Chinese militarism disappears as swiftly as it appears, once the “Little Soldiers” grow up and start understanding a little more about the world around them. However, the pictures of her boy marching along the flat corridor were very vivid, and we should not forget that she went to the most prestigious kindergarten in Shanghai.
I remember that in Russia we also had schools that were particularly keen on “militaristic patriotism”, and one of my schools was “leaning towards” that.
Judging by “Little Soldiers”, China seems to suffer from overpopulation. In fact, she attributes quite a lot of the grievances outlined in the book to overpopulation.
People have to become migrant workers in big cities, due to the lack of opportunities at home. Women have to become prostitutes, for similar reasons.
Corruption is largely due to the overpopulation and lack of teachers. And the overwhelming reverence before teachers also comes from the fact that teachers are just too few.
And while I have seen quite a lot of it with my own eyes, sometimes I wonder which is the cause and which is the effect. After all, China is comparable in both size and budget to Europe taken as a whole.
However, I feel that this is not the whole picture. After all, people are not as eager to have children nowadays as they used to be in the past. Moreover, with such a giant business opportunity, it is strange that business has not found a solution.
In any case, love for Shanghai permeates the book from the first page to the last. And I completely understand! At least for me, Shanghai was a place that I fell in love with very quickly.
And, honestly, I am quite happy that there are other people who are feeling the same.
Although I have not actually managed to recognise the school that played such a major role in the plot, I have been in contact with a bit of the Chinese education system, and I can very well relate to what she is writing about.
And even so, Shanghai is a lovely place, which is full of marvels, and unexpected discoveries.
Thank you very much, Mrs Chu, for bringing this up once again.
This review happened to be written in a much more disorganised way than my previous ones. This is a little strange, since this book is supposed to be much easier than most of the previous ones I have written the review for. It is not advanced tech, and not even rigorous science, and was, frankly, an easy listen. (Yes, I have listened to it using the Text-To-Speech machinery, as I do quite a lot of my books.)
Do I recommend reading it? Definitely! It should take no more than a couple of evenings, if reading from paper.
It would be a ton of fun for anyone who is into China, into Shanghai, into education, or is just raising his own child.
Shall it be included in the High School Curriculum? I guess, my life could have gone a different way if I had read this book as a school student. Hence, it is very unlikely that anybody is going to let such a book be studied at school. After all, teachers are depicted there as ordinary human beings, and that is one offence a school would never forgive. But, if you are a high-school student, and you happen to be passing by this review, by all means, taste it. You will get plenty of examples to taunt your teachers and become their chief nuisance.
For everyone interested in China, this is also a really nice exposition of the “real” China.
Android storage is really mind-boggling.
Background reading: https://www.kernel.org/doc/Documentation/filesystems/sharedsubtree.txt
https://android.stackexchange.com/questions/210139/what-is-the-u-everybody-uid/210159#210159
Selinux?
https://android.stackexchange.com/questions/210139/what-is-the-u-everybody-uid/
Runtime permissions
https://source.android.com/devices/storage#runtime_permissions
Namespaces
https://developer.android.com/training/data-storage#scoped-storage
This is a kind of “Registry” for system properties. See getprop -T -Z.
They have something like selinux attributes attached to them (crazy!), so you need to understand SELinux first, before doing anything useful with them.
I have not found a corresponding setprop; however, there exists settings.
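For illustration, a couple of settings invocations (run from an adb shell; airplane_mode_on is a standard Android global key):

settings list global                  # dump all global settings
settings get global airplane_mode_on  # read a single key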
This is something like what Android has instead of init.d (why would you reinvent the wheel again?). Magisk can plug into this file without making the system go insane, but I guess it is better to use Magisk’s services anyway.
/data/system/packages.list
am is the command for activities/services management.
For example, you can restart “Airplane Mode” by typing:
#!/system/bin/sh
settings put global airplane_mode_on 1
sleep 1
am broadcast -a android.intent.action.AIRPLANE_MODE
sleep 1
settings put global airplane_mode_on 0
sleep 1
am broadcast -a android.intent.action.AIRPLANE_MODE
This memo contains a checklist that I created during preparation for a conference. It is by no means exhaustive, necessary or sufficient.
You need to calibrate the parameter X. For me X=1, but if you are smart this may be successful with X>1.
The following sub-items are to be performed for each of the X papers.
You need to calibrate the Y. I think that 4 is a realistic number. Add one well-read paper from the previous headline. For a three-day conference this makes 15 presentations. Already a lot.
A non-exhaustive list:
As a homework, paste your Twitter, GitLab, account links, and other Social Media links at the top of the file.
Take a photo with them. It is an easy way to remember what they look like and to add them to your BBDB.
Write it down into the BBDB.
How is this even done at the video conferences? “Private rooms” just sound creepy. At chat-based conferences?
Sometimes I read articles and I need to put my thoughts somewhere. Why not here?
Host: Robert Wiblin Guest: Spencer Greenberg
PhD in Applied Mathematics, develops software for mental well-being, founder of ClearerThinking and EffectiveAltruism.
A robot for psychological counselling.
That is what my psychologist was telling me about.
That is when you associate something bad with something good. Spencer is doing workouts while watching TV shows.
Associate certain things with happy thoughts.
"Tea exists in the world, it's such a wonderful thing!"
Intrinsic values are something we value for themselves. This is opposed to instrumental values, which are mostly valuable for the effect they produce.
Like that job offer by Biden's administration, that they have put on the White House website.
That is almost like UNIX security: oga.
That helps people work together even if they do not know each other.
A huge number of punishers!
Pursue the value without actually getting an intrinsic value out of it. Particular case: you associate something with an intrinsic value because it used to be associated. Particular case: you had a false belief that it was your value. Example: not having enough money reduced your autonomy, and you keep searching for money since.
When you are learning bullshit.
Diminishing returns on scaling one's emotions to larger scales. One person suffers: bad; ten people suffer: worse; a million people suffer: ah, so bad, but you cannot scale your emotion.
Algorithm: questioning the goal "why do I want it?".
Because A, B, C.
Is there another plan that brings you A, B, C?
This is what makes you find those A, B, C, that are leaves in your value system.
It may mislead you from your true intrinsic values.
It's a bit of a hard book, although it does leave a bit of hope.
What do I remember on the next day?
It's a very short book, actually. You can read it in an hour or two.
What is promising is that if you want to keep being young, you can do that for quite a long time.
Apart from that, most of the ideas I have already seen.
Maybe one of the thoughts that I should still keep in mind is that life is still as cruel as before.
https://www.wired.com/1998/08/jini/
# | Word | Meaning |
---|---|---|
1 | ruefully | sorrowfully, regretfully |
2 | riotously | wildly, aggressively |
3 | logjam | a jam of logs at a river |
4 | cobbled | paved with cobblestones |
5 | hurtling | rushing violently |
6 | hole up | hide out |
7 | skunkworks | an experimental lab |
8 | slipstream | underwater current from an engine |
9 | unfurl | unroll, unfold |
The Net made it possible. Java made it doable. Jini might just make it happen. An on-the-fly, plug-and-work, global nervous system that connects his cam to her RAM to your PDA.
Interesting: the first PDA I saw was in 2003, AFAIR. The article is from 1998, and they already have them. And they are already thinking about the "Cloud".
I have never thought that McLuhan was behind all that cyberpunk idea. I need to check on him.
1998, and they already had that "Silicon Valley Demo" idea.
They already had flat-screen monitors in 1998!
Storage is philosophically important in computing. Laymen do not understand, or understand only very roughly, the idea of storage, and why a picture weighs more than text.
So, their assumption about the "ease" of something is already biased. What we have come to now, in 2021, is to try and make people forget about "storage" altogether. People do not understand how much "Storage", say, Telegram eats every minute.
What Java aims to do for software - be a lingua franca - Jini hopes to do for the machines that run it: provide an overarching, universal platform - a distributed operating system, in effect, on which devices of every description can meet. "Jini is the next chapter in the Java story," reads another project mantra.
The release name is still being debated, but the marketing plan is not: It will reprise the same strategy that fueled the explosive take-offs of both the World Wide Web and Java - essentially, give it away. "There's one thing we've all learned from watching Java and the Net," says Mike Clary, Joy's key colleague in Aspen and Jini's overall project manager. "This can only be a ubiquity play."
Aspen is a small city in Colorado. It is 1800 km away from Silicon Valley.
Doesn't make sense. Why would you go to a town of 7000 people?
They probably mean something like a Victorian mansion.
Jini is a set of new software layers that together create an overarching "federation" of computer devices and services.
On top is a directory service, based on a "lookup" mechanism that allows different Jini-enabled devices and applications to register and be seen on the network. The next-level service is persistence, provided by JavaSpaces technology, which stores objects so that other users or applications can retrieve them. Below that, a set of protocols based on Java's Remote Method Invocation enables objects to communicate and pass each other code. And finally a boot, join, and discover protocol allows Jini-compatible devices, users, and applications to announce themselves to the network and register in a directory.
Isn't that kinda "serverless"?
https://www.wired.com/2000/04/joy-2/
# | word | meaning |
---|---|---|
1 | jaded | state of disillusionment and sadness |
2 | isthmus | a narrow strip of land between 2 landmasses |
3 | concomitantly | simultaneously |
4 | placidly | calmly |
5 | precipice | a very tall cliff (metaphorically) |
6 | bode well | To seem indicative of a favorable outcome |
7 | trot out | Bring out and show for inspection and admiration |
That guy that wrote about the "technological singularity".
The last one seems to have been in 2009. http://www.telecosmconference.com/ George Gilder and Steve Forbes were the main drivers.
Gilder is that guy: https://threefounderspublishing.com/our-editors
Forbes is the Forbes Magazine Editor-in-Chief. (Yes, that very Forbes.)
Those venture capitalists seemed to be very enthusiastic about "Technological Singularity" or some of that kind of stuff.
Seriously? Searle was invited to that kind of conference? Unbelievable.
Effectively happening now. Generally, people are very eagerly rejecting consciousness, offloading it to the machines wherever possible.
So, the scarecrow of "antibiotic-resistant bacteria" was already present in 2000. So, 20 years have passed, where are those nitrocharged bacteria that you are warning about, Dr Gelfand?
Hm… Facebook censorship argument.
Still exists. https://longnow.org/
Still alive, works for the USA Defence Council. https://en.wikipedia.org/wiki/Danny_Hillis Made https://en.wikipedia.org/wiki/Connection_Machine at https://en.wikipedia.org/wiki/Thinking_Machines_Corporation.
They had kind of a dream-team of scientists working for them, but failed. Parts of the company were later absorbed by Sun Microsystems, which itself failed too.
This is interesting and philosophically inspiring. Machines are not the thing that gives us new stuff; they are the thing that checks our ideas for correctness. The new paragon of truth.
If you can explain it to a machine, your idea is correct.
Aha! The Golden Rice has been the case in 2000 already.
Still exists.
The Foresight Institute is a Palo Alto, California-based research non-profit that promotes the development of nanotechnology and other emerging technologies. The institute holds conferences on molecular nanotechnology and awards yearly prizes for developments in the field.
Did it?
We do have sequencing chips now, but they are the only bio-electronic thing I am aware of that is on the market.
Darwinian outcompetition! Fun!
That is, indeed, happening!
"At the dawn of societies, men saw their passage on Earth as nothing more than a labyrinth of pain, at the end of which stood a door leading, via their death, to the company of gods and to Eternity. With the Hebrews and then the Greeks, some men dared free themselves from theological demands and dream of an ideal City where Liberty would flourish. Others, noting the evolution of the market society, understood that the liberty of some would entail the alienation of others, and they sought Equality." Jacques helped me understand how these three different utopian goals exist in tension in our society today. He goes on to describe a fourth utopia, Fraternity, whose foundation is altruism. Fraternity alone associates individual happiness with the happiness of others, affording the promise of self-sustainment.
Fraternity is, obviously, a mirage.
But many other people who know about the dangers still seem strangely silent. When pressed, they trot out the "this is nothing new" riposte—as if awareness of what could happen is response enough. They tell me, There are universities filled with bioethicists who study this stuff all day long. They say, All this has been written about before, and by experts. They complain, Your worries and your arguments are already old hat. I don't know where these people hide their fear. As an architect of complex systems I enter this arena as a generalist. But should this diminish my concerns? I am aware of how much has been written about, talked about, and lectured about so authoritatively. But does this mean it has reached people? Does this mean we can discount the dangers before us? Knowing is not a rationale for not acting. Can we doubt that knowledge has become a weapon we wield against ourselves?
So, he had that fear of the unknown even at that time. Interesting. I used to think that the Golden Age was the age of overwhelming optimism. I was wrong.
A von Neumann probe is a spacecraft capable of replicating itself.
About 10% through reading Tractatus, I realised that many of the statements Wittgenstein makes have a lot of sense if interpreted in a programming context.
This file is the orgification of his book.
From the very beginning, from the first footnote, it struck me as fitting extremely well into an "org-like" format of a tree of thoughts. I decided to convert it into an org-mode file, with every thought represented by a heading, and every leaf being my comment on what this leaf actually means. All Wittgenstein's thoughts are represented as headings, and mine are contained within bodies. All my thoughts follow the "one sentence – one line" rule. Each of Wittgenstein's thoughts is on a single line, but his thoughts may consist of several sentences.
Knowing the Lisp saying "top level is hopeless", I took the liberty of adding this explanatory comment at the top level, in the hope that if someone is going to parse this file with an automated parser, this comment would be easier to bypass.
This file is based on the 1974 edition. Tractatus was originally written in German, in 1921, and translated into English in 1922.
References:
What is "the world"? And what is "the case"? Also, what is "all"?
Okay, the simple thing is: "the world" here is a "logical world", that is, something that exists in a computer that processes this "something". Let us throw away a bit of idealism and admit: people are stupid and forgetful, and philosophy is a loose field of research. If we want to make any sense of this book, it applies way more to computers than to people.
Hence, I should probably be saying "memory" instead of "the world". (Maybe, "storage" would be even better, but let's get back a bit of our idealism and imagine a computer with fast storage.)
"Facts" are, therefore, what we nowadays call "data" (plural).
A computer cannot get out of its memory. No matter how and what you program, from the programming perspective it is only the state of memory that is changing.
Moreover, if we state that the memory (or at least some part of the memory) is immutable and large enough to have everything that we would possibly ever be asking from sensors, then we can remove the sensors from our logical system entirely, without loss of generality. That is especially true if we do not allow Random Access Memory, but make the machine move the reading head at finite speed.
So, since we have some stuff written in our machine's memory by default, and no more "external" data can be added, everything that can be computed must be computed from the existing data.
Important 1:: we can, obviously (not obviously at all!), generate some random garbage and write it into our memory. And using this random garbage, we can compute (predict) everything. However, we the programmers (logicians) are usually interested in those computation results (logical inferences) that actually make some sense, not random garbage.
So the computation we are making must not contradict what is already in the memory to be useful.
Important 2: maybe we cannot even generate randomness, can we? We can use a PRNG to generate pseudo-random bits, but a PRNG needs a seed, and the only place we can take this seed from is, again, the initial state of the memory.
I do not understand this. Is it a repetition of the premise that there is no IO, without loss of generality?
Also, I very strongly feel that I should somehow connect this thought with the "open world" and "closed world" metaphor in inference engines (such as Prolog and SQL), but I'm too ignorant for that.
Memory consists of cells.
Cells are bits, and can be either 1 or 0.
(In programming we usually use bytes or words as a minimal elementary operating unit, but perhaps bits can also work as a substrate.)
I do not understand. Does he mean that ones in memory should correspond to "true" things in the real world?
So, this "state of affairs" is that initial memory state that somehow describes the "real world" whatever that be. I am not very sure, but it seems to me that the actual thought here is that this "input" should be partition-able into pieces of input describing various things in the world. (Not sure whether this partition-ability is obvious.)
Does it mean that the input (world) should not be self-contradictory? For example, input should not contain both x=1, and x=2 in whatever kitchen arithmetic we may reason.
I think this means that "types exist". Wow, that's a grandiose statement, isn't it?
Maybe it rather means that "bytes do not mean anything by themselves, but only when there is some human understanding of what these bytes represent". This still implies "types", but in a less mechanistic way.
What does it even mean "a situation would fit a thing"?
Okay, I am not all sure in what I am writing here, but here is my view on this:
Some things are in some state, some other things are in some other state. Firstly, you should be able to just append an encoding of a state of some thing to the end of the input. Because why not?
Assume types exist. Then instead of doing a computation on objects, you can do a computation on types, and create a mapping from a set of all possible inputs (matching your expected types) to a full set of all possible outputs. For "computation" (narrowly understood) that is probably not feasible, as it would blow up exponentially with every input variable, but for "logic" as the science of "all truths", computational inefficiency should not matter.
Make a pull request with your own understanding of what this means.
possible situations, but this form of independence is a form of connexion with states of affairs, a form of dependence. (It is impossible for words to appear in two different rôles: by themselves, and in propositions.)
"Dependent" meaning that they appear as input to the same program, and it would just make sense for them to be used together?
Does Wittgenstein use "propositions" to mean "functions"? If yes, then this would mean that functions are not the same thing as variables, right?
Or, rather, reasoning about types, one has to give a name to a type, and this name cannot be used for a function at the same time.
I think this, again, means that, essentially, "types exist". That is, a C++ Point{int,int}, can only be what I have described, not Point{int,int,int}.
This I do not understand. What are external and internal properties? If by "internal properties", Wittgenstein means "state", or even "initial state", then this is reasonable.
And "external" means… computable? Like, we do not need to know that P(a) = 4, but we need to know what a is to work with it.
Again, the idea seems to be that we can compute with types instead of instances.
Again, the input can be empty. But we cannot compute without input.
Again, this seems to mean that we describe things with parameters.
Well, if the only thing we have is the input string, then we have nowhere else to draw information.
It's a bit confusing, but I feel that what he actually means is the following:
I do not understand.
This is, again, kinda about defining complex types from primitive types. I guess Wittgenstein means that there should be a set of primitive values that have primitive operations working on them (he calls "objects" what we would nowadays call primitive objects).
Here we would have to consider bit-wise operations. It seems that in modern programming we sometimes can extract pieces of primitive objects.
Hm… suppose our function has no input. Is it the same as "no substance"?
Well, again, if your function has no input and no state, then it can only be constant, right? (Perhaps, a constant trajectory.)
Interesting point. I do not understand it. Let us imagine that our function implements a general-purpose programming language. Then it would still eventually have to be reduced to the primitives of the first function. Right?
So that is still bits?
This seems like a re-iteration of the fact that numbers do not mean anything by themselves, only in conjunction with their interpretation.
Does it mean that "bits mean nothing"?
So (255,0,0) can mean a red colour, or a coordinate of a 3d point.
Is this "type punning"? If things are described with the same set of parameters, then reading the memory cannot tell you which of the things was actually meant. If you add an explicit type qualifier, it is kinda "a parameter", isn't it?
Bits do not know anything about "reality".
So… bits are the only thing that really exists.
Or a concrete juxtaposition of bit values.
I do not understand. Well, let us assume that we can partition the input into "objects". Then, unless we can re-partition the input, the objects keep being objects.
Like, the immutable input string.
Seems naturally following from the above.
That is input.
I do not understand. Does it mean that the input is sequential?
Again, if you partition the input into arguments, their order is the only thing that defines which one is which, because otherwise they are just bits.
That is "a function prototype", isn't it?
Unclear.
I think that by a "fact" he means "what is true". A "structure of truth". It is a bit hard to explain.
Perhaps we can see this as saying: the world is the evolution of the memory. Things that logic (or machine) can compute at the same time, will be in the memory.
Again, we only have the input as the thing that provides knowledge.
Or which inputs are prohibited.
We can compute that the input we are given is meaningless or contradictory. This is our "negative fact". Or, we can infer that a certain situation can never possibly occur.
Because they represent different, mutually exclusive, machine states.
Because they are inputs, right?
Or, rather, everything that is computable?
Isn't that again, that all logic needs an interpreter?
Is that a "state of memory" again?
That's just it? What am I missing here?
This seems to be, again about a mapping between reality and computing.
So, the "picture" is expected to me a representation of the "form"?
What is "it" here? A picture or a fact?
Seems, again to stress that the representation is non-ideal.
All right. So, there is what we would have called a "canonical pictorial form" of reality, which represents it exhaustively. And "our picture", non-ideal, must have the same skeleton as the "canonical picture".
Perhaps I was completely wrong about the previous point. It seems we are speaking again about the types and data structures.
Like, this sounds like an "area of applicability", "area of effect", or "domain". A picture, indeed, cannot display "a picture".
The "representational form" here seems to be "the way representation is designed". For example, a picture may consist of a grid of pixels.
Correctness and incorrectness here seem under-specified. There may be pictures that cannot be represented by a grid of pixels.
On the other hand, a potentially valid representation may be "just wrong". For example, if an apple is represented as an image of a plum.
A grid of pixels cannot represent something that is not representable as a grid of pixels.
So… this… "grid of pixels" is, a "logical form". Perhaps, since you can express pixels as bits.
Seems a tautological statement, or… like… the thing that is in essence a "universal Turing machine".
Yeah, since we can eventually interpret everything as bits and predicates.
Really? Well, but how well?
I guess, bits should be similar to bits. But what about different data structures?
And the existence and non-existence are the primary question of logic. Basically, everything can be reduced to logic.
A configuration of bits.
Since it is given.
What about imprecise representations?
Does it mean, that we can, basically, draw pictures of imaginary things?
What is "sense"? This seems one of the "leaves" that deals with inexplicable.
Em… is it like, we are drawing something that is not correctly representing reality… And if it represents it incorrectly, we say that it is false?
Great! How?
Sure, you can generate a configuration of bits at random.
Again, because you can generate everything.
This seems to mean that in order to even get in contact with the facts, we need to picture them in a logical substrate (express them in bits).
I think that some states of affairs are not thinkable. But those that are thinkable, we should be able to express as our internal logical language.
Maybe incomplete, right?
Seems that the definition of "possible" is strange here. I guess, the axioms may be strange.
Well, if logic is "the driving force" of thinking, it has to be like this…
Because, supposedly, a computing brain is driven by logic and cannot transcend it.
This raises a question whether we can say anything about uncomputable numbers.
That is a "comment", right? So, "logical" does not necessarily mean "true". May be false, but still logical.
But they are not, are they? On the other hand, perhaps it could be possible to devise a logical system which would not be able to express false statements?
That's, like, the laws of your logical system? Or, perhaps, functions without input correspond to "apriori truths"?
A thought is a "logical picture of facts". Proposition, here, perhaps, means "something that we can compare with the senses".
Apparently, the language of bits (logic) is more general than the specific sub-languages that describe measurable things. Therefore, we project.
So, we express a proposition, create some configuration of bits, or maybe even an image of the radiant bit grid, and the relation between a thought and this image is "projective relation".
A proposition is a perceptible projection of a thought.
Are "words" here real words, or formal combinations of letters? Are data structures words? How is a "determinate relation" formalised?
How exactly? Are there rules to this articulation?
I do not understand. So, there is some substance?
So, basically, we express our propositions in a language, that is "physical", and bits are encoded in memory, however, what we actually mean are logical propositions.
We would still have to assign a meaning to the objects, right?
Because a proposition is a projection of a fact.
What are "situations"? It seems that giving a name to a set is fine. Why not?
So we are writing this "proposition" to represent a thought (a logical picture of facts) in some way. With words, I presume.
If a proposition is properly written, so that "words", or "simple signs" correspond to reality, then is it "completely analysed"?
Or, let's try to think about it from another angle: if you have a function, a λ-abstraction of something, and you do an "analysis" step, that is, you resolve all Scheme symbols in a form and obtain their memory values, then you are also getting something "completely analysed".
In Scheme do "names" correspond to "symbols" at an analysis stage?
So, "symbols" are unique. Do they evaluate to themselves?
Necessarily? By construction? Doesn't seem to be the case, although if you are only considering exact correspondences between logic and Scheme, it's probably true.
So, a procedure.
That seems arguable. I should be able to "serialise" objects.
Propositions, functions, evidently, predict behaviour, not describe things.
Well, otherwise a program is broken. But what does it mean "possible"? Resolve to themselves or resolve to values?
So a definition is kinda like in Scheme. You use the special definition word to make a simple symbol 'symbol resolve to a value, maybe complex.
If our forms use a symbol with an undefined value, then what? This symbol resolves to false? Seems so.
What is a prototype here?
Because we are, again, just substituting symbols?
So, a proposition is a "formal construction"?
Like, this is, perhaps, that place where "symbols evaluate to themselves".
So, I cannot implement a function that behaves in an exactly the same way as a built-in?
Names, indeed, cannot be broken into pieces, because a definition is only an association.
I think we can understand "application" here in the same way we understand an application in Scheme. Indeed, not everything is clear from a function definition, but running a function should give us all information about it.
Or not? At least, we cannot solve the halting problem like this. But logic cannot either.
We usually explain primitives in English. In the Scheme Report, the primitives are defined through something called "Denotational Semantics".
I guess, what he really says here is that if we know a "meaning of a function", or its truth-table, we could, potentially, infer the meaning of the primitives, perhaps in a way similar to solving an equation.
I guess, we can have propositions that are very simple? Such as "resolving a value of a name"?
So… expressions are like meaningful sub-units of propositions. There may be other things in a proposition, but those are not related to its meaning. For example, there may be debugging or type annotations.
But how do "form and content" come together here?
A class of "propositions using this expression"? Perhaps, but what would it give us?
Inlining for speed?
Totally confusing? What is a "general form of the propositions"?
Is it even possible to devise a non-trivial "general form of propositions using an expression A"?
Does the phrase "everything else variable" make this clause useless?
Okay, in "high-school logic", a propositional variable is a variable that can be true or false. I guess, you can say that a variable becomes true if an corresponding expression evaluates to true.
It seems that Wittgenstein dislikes REPLs. Or that "top-level is not very well defined".
Otherwise, it seems that, indeed, all "expressions" can be assigned to variables. (Note, however, that Scheme allows for syntactic expressions.)
"All variables can be construed as propositional variables (even variable names)", I guess, means that (=? (symbol->string 'variable) "variable") => #t
Hm… This "logical prototype" seems to be a function that has no constants? It can have calls to other functions or values, but those can only be supplied as arguments…
Are these, like… pure functions?
Or, is the idea in that a function with no constants is actually equivalent to a type?
Seems, again, that now we are defining a "type" for a variable (as opposed to a function (proposition)).
Seems like… So, here we are defining a variable (a data structure), via a set of accessors to this data structure. The "meaning" here is really the "implementation" of the data structure and the accessors.
I should look up what Frege and Russell did.
But it is quite reasonable that you can make expressions by combining more primitive expressions.
So, in Scheme symbols stand for stuff that can be resolved to values (maybe undefined), and can be compared in O(1).
In Lisp in general, people are even expected to speak directly with symbols in runtime.
I guess, what Wittgenstein is saying here, is that we do not really need a disjoint type for symbols in programming languages. (lookup-variable-value) would be designed to work with strings, not symbols.
Identifiers shadow other identifiers. That is the same thing can have different values in different contexts.
Same thing. Using the same variable name for different values in different contexts does not make those values in any sense related.
So, Wittgenstein is suggesting to have a separate obarray for each context? Hmm, no, doesn't seem to be the case. On the other hand, "Green" and "green" seem to resolve to two different values in the same sentence.
On the other hand, maybe it is just that the Sign and a Symbol are reversed?
Programming as well! That is why we sometimes think that static types should save us. With static types at least the type of resolved variables should match.
This seems to be a prescriptive clause.
Ok, my thought. The difficulty with human languages is that we are ready to search different obarrays for variable resolution. And if an obarray fits the propositional structure, we start resolving most variables from that obarray.
People seem to have several environments of evaluation of an incoming proposition. The correct obarray is chosen if all variables of a clause are resolvable from this environment. If all variables are resolvable from two environments, it's either a tautology, or a pun.
So, for clarity, we should stick to using a single obarray.
Upd: this is the first place where he speaks about "logical syntax", not even defining it properly.
That seems to be it! That is the ambiguity resolution heuristic. One variable alone may be resolvable from environment-1, but we need all of them to resolve.
So, programs may consist of everything, essentially.
The programming system is not defined by the words used for "if" or "goto"; those could have been "si" or "aller".
Again, the sign (Scheme-symbol) is eliminated at read-time, and replaced with a memory pointer, an internal representation. In this sense, it does not matter.
The idea here seems to be that programming language standards may not have any prose.
That is, there should be some description of clause transformations under the effect of syntactic signs.
I think that syntactic signs here are not just what Scheme calls "syntax", but also what Scheme calls "procedures".
I have no idea what Russell's 'theory of types' was and whether it had any relation to modern static typing.
Perhaps, this can be related to Footnote 80 in Structure and Interpretation of Computer Programs (SICP). (page 649 in the Unofficial Texinfo Format 2.andresraba5.6)
When making recursive propositions, some very strange infinite series may arise.
I do not understand 😢.
Is it a prescriptive clause? Moreover, it's "how each sign signifies", not "what each sign signifies". I am confused.
Perhaps, it means the same thing: that as long as you have properly written syntax as clause transformations, you do not need to explain the semantics "in English".
It seems that the difference here is, again, between "a program" and "a correct program". Performance seems to be the issue here as well.
A "symbol" is "a value" here. So, the propositions that are "essentially the same" would be: (and a b) and (and a b #t) and (and a b "true") #t and "true" can serve as the same true value.
"all kinds of composition would prove to be unessential to a name." what does it even mean?
When you do not care about resource consumption, you can avoid using a name at all, and try to infer the correct value by writing all sorts of propositions about an object and telling the machine to infer the object from those propositions when needed. Is that what is implied here?
I guess, this means that a variable cannot be "empty". If you introduce a name in a proposition, say, (display a), then in the worst case your interpreter will tell you "a is undefined", that is "the value of a is #undef". This may be an error, or may not be. But a will be in the obarray until the termination of the program, so it will have some kind of "value".
Well, each copy of a program is philosophically unimportant, but the fact that a program as a packaged thought exists, is quite extraordinary philosophically. Moreover, programs generally seem to encompass the way of thinking of their authors.
That should be the case if, once again, you ignore Input/Output, and, possibly, mutation. I guess, then you should be left with Turing-completeness? Or, maybe, some other kind of completeness?
I feel like he's hand-waving a lot about the "logical syntax", not even defining it properly.
On the other hand, again, if we define a datatype through the accessors, then "what is common to all the symbols" just means that "semantics allows us to abstract over concrete implementations, as long as the accessors still work".
So, is this just about making sure that Scheme and C are equivalent in the logical sense?
It would have been a nice testing heuristic.
I am not very sure, but it seems that "analysis" in this clause really is the same as the "analysis" in SICP. When a complex sign is transformed into a lambda, there is only one way of doing so throughout the program? Or isn't it?
Not very sure.
This is kind of obvious. Your proposition will eventually be broken down into the elementary variables, bits, that can be 1 or 0, and that is "the space" and "the place". The proposition will have sense, but may be false.
Kinda means that a relation is a subset of the Cartesian product of the input and output. First-year mathematical logic.
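A minimal sketch of that view, with a relation stored as an explicit subset of pairs (the relation itself is my toy example):

```scheme
;; "y is the double of x", as a finite subset of the Cartesian product.
(define doubles '((1 . 2) (2 . 4) (3 . 6)))

(define (related? x y)
  (if (member (cons x y) doubles) #t #f))

(related? 2 4)  ; => #t
(related? 2 5)  ; => #f
```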
A "thought" is a logical picture of facts, as he defines it. So, evaluating a proposition should give us a "logical picture of facts"?
Don't all propositions have sense by construction? Or, maybe, it's just complex propositions that are guaranteed to have sense. Basic ones may not have any sense.
Theoretical totality or practical totality? What about propositions that have no sense?
This clause is almost saying that "The theory in this book should not be applied to human languages, really". Leave "exact symbolism" to machines.
To me at least, this clause should be interpreted in the following way: What "makes sense" is really equivalent to "what we can write a program about, deciding whether things are true or false".
The "deep questions proposed in philosophy that have no logical sense" are not really lacking sense, they are illustrating the difference between that part of the language (and with it, the reality) that we have already harnessed by logic (and thus can reason about, and write programs about), and the part that we are capable of operating with (as humans), but do not understand well enough and deep enough to write programs about.
This is the central raison d'être of philosophy: spotting those areas that exist, but to which the logical abstraction tree that allows us to create sciences and societies has not yet grown.
Fritz Mauthner, German author, theatre critic, and exponent of philosophical Skepticism derived from a critique of human knowledge.
I'd really like him to point out where exactly Russell shows this.
Let us recall Eric Berne here, who introduced the notion of "transactions" in human interaction and suggested that the superficial transaction structure may not be the underlying one. "Let me show you my wonderful haystack in the barn" may be interpreted ambiguously.
Indeed. We are inputting those propositions into the machine, and thus creating a model of reality. Then we can query the machine and see if our model of reality is any good.
"We imagine it" should be really read as "as the program describes it".
In the "ordinary sense" here should hint us that machines should eventually recognise code written on paper with pen, and be able to interpret it.
"At first sight" should hint us that we are missing a huge lot when discussing programs without data (!). This is very important, as garbage-in=garbage-out.
Notes are a nice illustration here, because in order to generate wave-forms from notes, you need to have a sound bank, or do FM-synthesis.
Well, logic is such a primitive domain that it is similar to the manipulation of characters.
Emm..? How are sharp and flat "irregularities"? Aren't they a valid part of music notation?
Let us try to not overthink it. Music notation consists of notes. Obviously, sharp and flat are extraneous here.
So… a piece of memory, a file, is "real". We can take this text, written in an org file, and put it through a TTS engine, to obtain its audio form, or put it onto the display, to obtain a picture.
What I do not like here, is the mention of the fairy-tale. Horses and lilies are aspects of the concept, not views. A pair of lovers in a fairy tale is not necessarily bound to have a pair of horses. But maybe that is just me.
Fair point. Seems to be exactly about what I wrote above. The "symphony" is bytes, and we can generate different presentations of those bytes.
I would say "bound" rather than "contained". Your expressions have to be valid with respect to the language that you are writing them in. And you can, in principle, describe "all possible images" that can be written in, say, a 32-bit TIFF file. That set is not just well-defined, it is finite.
Not very useful illustration, but doesn't hurt.
Well, if the syntax of your proposition is correct…
If my compiler compiles the code, it more or less means that it understands it as much as it understands anything.
I think this refers a little to the constructivist approach of computer logic. In order to formulate a computable proposition, you need to formulate it in a constructive way: "how to check that it is correct".
Does it mean that Wittgenstein's theory does not work for anything non-constructive?
This yes-or-no reduction is very common in Computer Science. Very few functions in Computer Science are formulated as "What is f(x)?" Rather, they are formulated as "Is the first bit of f(x) equal to 1?".
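A sketch of this reduction (the function f here is a stand-in of my own choosing):

```scheme
;; Instead of asking "what is f(x)?", ask the decision questions
;; "is bit k of f(x) equal to 1?" for each k.
(define (f x) (* x x))

(define (bit-set? x k)
  (odd? (quotient (f x) (expt 2 k))))

(f 3)           ; => 9, which is #b1001
(bit-set? 3 0)  ; => #t, bit 0 of 9
(bit-set? 3 1)  ; => #f, bit 1 of 9
```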
The dependency between internal and external properties is not so obvious. We may look at it from an information-theoretic standpoint: by describing an object's external properties, as long as those properties are consistent with the basic laws of the world, we are (in general) constraining its internal properties as well.
Propositions allow us to infer these internal properties.
Example: a vector linear equation \(Ax=0\). This is an external property of the set of \(x\).
An internal property would be the fact that any linear combination of solutions \(x_1\) and \(x_2\) is also a solution. Or something like that.
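Spelling that internal property out: if \(Ax_1 = 0\) and \(Ax_2 = 0\), then by linearity \(A(\alpha x_1 + \beta x_2) = \alpha Ax_1 + \beta Ax_2 = 0\), so every linear combination of solutions is again a solution.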
Propositions are created according to a set of "laws of the world", aren't they?
Maybe, "to understand a proposition" here means "being able to read a proposition"? E.g. a compiler "understands" a function, if it is written in a language that it is written to understand.
A Scheme compiler understands Scheme. If a function written in Scheme is correctly written, the compiler may run it and produce a result that will be correct. (Because we believe that the function itself is correct.)
That's an important point!
And actually not always true in Lisp.
What is being said here is that when we are porting code from Scheme into Emacs Lisp, we are not writing an interpreter of Scheme in Emacs Lisp, but rather we are replacing Scheme's constructs with Emacs Lisp's.
The specification of special forms must be given in a "human language" with a lot of hand-waving. Procedures, however, can be written purely formally.
(Remember that our world has no I/O, so we probably do not need any primitive procedures.)
That is, extract the "internal properties".
A subroutine is written using the language it is written in. (Rules of the world.)
The rest I do not understand.
Let's say that a proposition gives us some new information as long as it is extracting it from the "world", and therefore is "understanding" it.
It can produce a random 1 or 0, but this is not very useful.
We can see "running a program" in some sense as an evolution of a world, in which the initial memory state is the initial state of the simulator, or the initial condition in a Cauchy (initial value) problem.
Functions, therefore, represent "what is happening" in the world, that is, a situation.
"a silent and motionless group of people arranged to represent a scene or incident." – tableau vivant
This is kinda like…
When your procedure is loaded into memory, the symbols and the parenthetical structure are transformed into an abstract syntax tree.
When you are resolving symbols, you are substituting them with the "actual things". So, a procedure, at the beginning of the evaluation, is like an image of the scene of an accident (event).
"Logical constants" here do not mean bits, but rather the primitives of the programming language (or instruction set).
Indeed, they are bound to be undefined and have no meaning for the machine itself, because "it just lives according to them".
It is possible to implement a language in terms of another language, sure, but at the bottom of it there will be a meaningless substrate.
We will ignore the Latin reference, it is not very useful.
I guess what is meant here is that the picture of the situation is drawn with the logical strokes.
The problem of "sameness" appears all over the science. As far as I remember, bosons and fermions are the most well-known example in the old science.
In programming this means that "distinguishable parts" are literally in the same memory. If they are represented by pointers, these pointers point to the same memory address.
I guess, this means that although the pointers have the same value, and point to the same address, the pointers themselves would inevitably be in different memory cells.
Wittgenstein is basically defining Scheme's \(let\) here. And/or discussing lexical scoping.
‘Spatial spectacles’ here would probably be called ‘spatial coordinates’ by quantum mechanics. Or, maybe, "spatial representation".
I think, the point here is that it is not enough to know the initial coordinates to predict behaviour. You need to also know which particle is "same" with the other particles.
How can this even be? Maybe you can imagine this as having two observations of highly similar bosons in different positions. Then you add some spin to the boson 1. Does it mean that boson 2 immediately gets the same spin? May be possible if it is "the same" boson.
In computers you can interpret it like this: if you (set! x 5) at some point, it does not mean that everything ever represented as x becomes 5. You need to consider the scope.
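A minimal sketch of that:

```scheme
(define x 1)
(let ((x 10))   ; a different binding with the same name
  (set! x 5)    ; mutates only the binding visible in this scope
  (display x))  ; prints 5
(display x)     ; prints 1: the outer x never changed
```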
I guess, we can just do it – compare reality with what the machine emits.
If this "reality" is really the physical reality, not some other Wittgensteinian concept I have missed. Let's see.
I guess, it can be interpreted as "the truth only exists in a machine". Your functions may give a true or false answer only with respect to the initial tape configuration.
Correctness, however, is independent of the initial tape state. If your function is expected to answer the question "are the bytes 1-14 set to 0", but instead outputs "true" or "false" at random, then it is just incorrect.
I am not entirely sure I understand the approach here…
But I guess he is saying that it is incorrect to say that by always using (not (p)) instead of (p), we can usually make a new true proposition.
Again, unclear to me what he is saying.
The sign '~' can certainly be redefined, for example, to identity.
Again, the underlying reality is not affected by our usage of (p) or (not (p)).
I guess, in our case the tape has ones and zeros, but a priori we cannot tell whether 1 is true, or 0 is true?
Perhaps, Wittgenstein is trying to convey to us the same idea that Abelson-Sussman in SICP describe with the (false? x) procedure.
Indeed, our interpreter has to have some intrinsic understanding of "truth". See section 4.1.3 Evaluator Data Structures of SICP.
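For reference, the evaluator in SICP section 4.1.3 fixes that intrinsic understanding in two tiny procedures (quoted from memory, so treat as approximate):

```scheme
;; Anything that is not the explicit false object counts as true.
(define (true? x) (not (eq? x false)))
(define (false? x) (eq? x false))
```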
Well, in Scheme we can usually extend syntax to some extent…
Unclear.
However, maybe it just means that unless your code is syntactically correct as it is interpreted, your machine will just "not work".
This sounds a bit pessimistic, though. Kind of meaning that a purely logical machine will not be able to exceed its own limits.
Or, configurations of bytes with respect to each other.
Simply put, "a natural science is anything for which a robot can be made to answer questions".
Above or below?
If philosophy is the area of thought about writing thoughts.
Or, maybe, about "encoding reality" in such a way that robots, then acting purely logically, would be able to do natural science.
This also seems a lot like "encoding reality".
Sure.
Aha, so philosophy is in "encoding the world", and "natural science" is then answering the questions about this encoding.
I guess, in commercial software this means that "analysis" is a philosophical job, and "implementation" can then be done in a purely scientific way.
Hm…
So, we're digitising/encoding reality by philosophising, and at some point we are encountering a "Lower Bound" to this digitisation.
And it should also tell why our digitisation will fail if we try to go further.
Hm… what about those ugly partially convergent functions? Those that can give us a response in many of the cases, but not all? Uncomputable, undecidable? Kolmogorov Complexity, for example?
Well, "design is code", right? If you specify your procedure well enough, you do not need to write it, you already have it?
But what about performance?
Also, it seems that he is claiming that an "Electric Programmer" is not possible, because logical synthesis is, apparently, not a clearly defined process?
I think that he is missing the "Metacircular Interpreter" discussion.
For sure, there is this last layer, at which you have to express a language in the language of the machine, and the primitives cannot be decomposed further.
But metacircularity still needs a discussion.
Again, I think that some metalanguage reasoning systems, formal methods and such, can compute that inference in some cases.
Maybe, "not necessarily".
I think that this means that we need a language that is correct and not self-contradictory, then any other language can be reinterpreted (reimplemented) using it.
I guess, he wants to define what is external and what is internal here?
But he is not actually saying what an external property is?
And those "features" are almost the same as the "features" in machine learning.
So, this is kinda easy?
These bits represent a picture. No matter whether we want to make our algorithm distinguish pictures of cats from pictures of dogs, or just display a wallpaper, these bits are still thought to be a picture, not an audio wave.
So, "being an image" is an internal property of an array of bits.
I guess, "having a cat image in it" should be an external property?
Indeed, it is nonsensical to run an image recognising predicate on something that represents a sound wave.
This I do not really understand. Perhaps, the idea here is that functions which work with byte data can "swallow" both an image representation, and a wavefront representation, and give "some" result. We wouldn't know whether it is correct, unless we know what the bytes actually stand for.
I guess, this is speaking about one level of abstraction higher.
Say, the picture at address A is of a wavefront, which is itself digitised and placed at address B. There certainly may be a relationship between propositions operating on them.
And?
"I call a series that is ordered by an *internal* relation a series of forms. The order of the number-series is not governed by an external relation but by an internal relation. The same is true of the series of propositions ‘aRb’, ‘(∃x):aRx.xRb’, ‘(∃x, y):aRx.xRy.yRb’, and so forth. (If b stands in one of these relations to a, I call b a successor of a.)"
Isn't this the Church encoding, or something?
Why is it an internal relation? A full order relation… seems to be derivable from +1.
When a formal concept represents a thing, it cannot be expressed as a proposition.
I think this is quite understandable.
A cat in a picture (represented by a byte array) has the formal concept of being a cat. We can represent this as a proposition (concept proper), but this proposition cannot be guaranteed to be 100% accurate (because, naturally, recognising images is hard!).
I am not very sure what this means.
Kinda… if we formally substitute and expand everything… And start seeing functions as operating on whole sets?
Isn't it that in machines "formal concepts" are only 1 and 2?
"Thus the variable name ‘x’ is the proper sign for the pseudo-concept *object*. Wherever the word ‘object’ (‘thing’, etc.) is correctly used, it is expressed in conceptual notation by a variable name. For example, in the proposition, ‘There are 2 objects which…’, it is expressed by ‘(∃x, y)…’. Wherever it is used in a different way, that is as a proper concept-word, nonsensical pseudo-propositions are the result. So one cannot say, for example, ‘There are objects’, as one might say, ‘There are books’. And it is just as impossible to say, ‘There are 100 objects’, or, ‘There are ℵ₀ objects’. And it is nonsensical to speak of the total number of objects. The same applies to the words ‘complex’, ‘fact’, ‘function’, ‘number’, etc. They all signify formal concepts, and are represented in conceptual notation by variables, not by functions or classes (as Frege and Russell believed). ‘1 is a number’, ‘There is only one zero’, and all similar expressions are nonsensical. (It is just as nonsensical to say, ‘There is only one 1’, as it would be to say, ‘2+2 at 3 o'clock equals 4’.)"
No, I do not understand that.
If we use Church-encoding, we can avoid using primitive numbers.
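A sketch of the Church encoding in Scheme (cf. SICP exercise 2.6): a numeral is "apply f that many times", so no primitive numbers are needed.

```scheme
(define zero (lambda (f) (lambda (x) x)))

(define (add-1 n)
  (lambda (f) (lambda (x) (f ((n f) x)))))

(define one (add-1 zero))
(define two (add-1 one))

;; "Decode" by counting with an ordinary increment:
((two (lambda (k) (+ k 1))) 0)  ; => 2
```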
And these will be "different" ones in the different propositions.
However, maybe he is speaking about "interning", functions, numbers, strings?
Or, when speaking about "machine learning", about labels of the objects?
It would make sense to intern labels?
In Lisp we seem to have an opposite view. There are "interned symbols" and "uninterned symbols".
The key here, I guess, is "primitive ideas".
I think, Wittgenstein stresses the word "primitive". It is possible to introduce both "numbers", and "number 1", but one of those has to be non-primitive.
This is kind of like trying to invent a lambda-expression, while not yet knowing what a lambda is. Trying to encode a function in a formal notation.
Poor Wittgenstein, born too early.
I think this means that the usage of something in a piece of code implies semantical existence of this something.
When we use a label in a machine-learning program, say, to distinguish chairs from tables, we almost imply that the very concept of chairs exists, since we use it in our code.
"Logical forms are *without* number. Hence there are no pre-eminent numbers in logic, and hence there is no possibility of philosophical monism or dualism, etc."
Why are they without number? Alphabets generally generate a countable number of words.
What does "no possibility of philosophical monism or dualism"? I remember that monism and dualism refer to the unity or separateness of brain and body.
That's a bit Popperian "verifiability" and "falsifiability".
A procedure has sense if it makes some inference about the input data, what is and what is not the case.
Seems to be, essentially, leaving input untouched and always returning "true".
Unless your computation system is self-contradictory?
Not obvious.
Maybe I am overthinking it? An elementary proposition is just a state of the bit N in the input? Or, maybe, subset of bits.
Not obvious. It is true that propositions may be symbolic expressions, and if they obey a certain syntax, they will, essentially, consist of juxtaposed variables… and function calls. These function calls are important.
This is supposed to be a commentary to the "law of forming propositions". Seems not very useful.
Mumbo-jumbo.
Does it mean that an "elementary proposition" has no variables, essentially?
Okay, fairly standard notation.
There is some caveat here between evaluation and substitution, defun and defmacro.
In his case, "=" is more like "eqv?", than "eq?".
An ‘a = b’ can also be false?
So, he is more or less trying to define logical equality in this subchapter?
In Scheme we have "eq?", "eqv?", "equal?" to represent "same" as in "same place in memory", "almost the same", and "equivalent in meaning".
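A quick illustration:

```scheme
(define a (list 1 2))
(define b a)           ; b is the very same object as a
(define c (list 1 2))  ; c merely looks the same

(eq? a b)     ; => #t, same place in memory
(eq? a c)     ; => #f, different places
(equal? a c)  ; => #t, equivalent in structure
(eqv? 2 2)    ; => #t, "almost the same" works for atoms
```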
Why not the other way round?
Perhaps, if we only speak about bits, then "true" would be equivalent to 1, and "false" to 0.
Ah! I didn't understand that.
For Wittgenstein, not all of the underlying structure of the world is accessible through input. Some memory cells are inaccessible as individual cells. (But, I guess, are accessible through propositions acting on blocks.)
At this point something starts worrying me about Wittgenstein's mathematical skills.
The sum is actually \(2^n\). And this is in perfect accord with \(2^n\) possible configurations of bits. It seems that his "states of affairs" are really bits.
Yeah, again, basically, input.
Again, basically, bits.
p | q | r |
---|---|---|
T | T | T |
F | T | T |
T | F | T |
T | T | F |
F | F | T |
F | T | F |
T | F | F |
F | F | F |
p | q |
---|---|
T | T |
F | T |
T | F |
F | F |
p |
---|
T |
F |
Oh, right, the first three binomial sums.
Basically, saying that you can turn any boolean function into a disjunction of conjunctions (or, dually, a conjunction of disjunctions).
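A sketch of that construction: read the truth table row by row and keep a conjunction for every row marked true (the representation of rows here is my own).

```scheme
;; rows: a list of (bit-list . truth-value) pairs, one per table row.
(define (row->conjunct vars bits)
  (cons 'and (map (lambda (v b) (if b v (list 'not v))) vars bits)))

(define (true-rows rows)
  (cond ((null? rows) '())
        ((cdar rows) (cons (car rows) (true-rows (cdr rows))))
        (else (true-rows (cdr rows)))))

(define (table->dnf vars rows)
  (cons 'or (map (lambda (row) (row->conjunct vars (car row)))
                 (true-rows rows))))

(table->dnf '(p q)
            '(((#t #t) . #t)
              ((#f #t) . #t)
              ((#t #f) . #f)
              ((#f #f) . #f)))
;; => (or (and p q) (and (not p) q)), which simplifies to q
```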
Again, since we have no I/O and no randomness, the input is the only thing left.
…*depends* on the understanding of elementary propositions.
Not very clear.
But clearly, all computation should eventually reduce to operations on bits.
We already know that \(K_n=2^n\), that's all possible bit-arrays.
Is that even true combinatorially?
How did he get this number, proponent of clarity?
Is it the maximal number of clauses in a conjunctive form? A proposition can be conditioned on 1 to n variables, and a subset of i of those variables can be in \(2^i\) configurations, each of which may or may not deliver truthfulness to the proposition.
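If I read this correctly, the count can be reconstructed as follows: there are \(K_n = 2^n\) truth-possibilities, and a proposition is fixed by marking each of them, independently, as agreeing or disagreeing. Hence the number of possible propositions over n elementary propositions is \(L_n = \sum_{\kappa=0}^{K_n} \binom{K_n}{\kappa} = 2^{K_n} = 2^{2^n}\).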
Ok?
This is, again, almost a reference to the "disjunction of conjunctions" form (that "expression").
"Formula?"
In our case it would be "a program".
Does he mean his own peculiar definition of objects, or objects in general?
Or, is he saying, again, that only human interpretation gives meaning to Ts and Fs?
p | q | |
---|---|---|
T | T | T |
F | T | T |
T | F | |
F | F | T |
This seems to be just his peculiar notation. Yes, if you fix the order of variables, you can define a predicate as a bit sub-set on the bit set representing all possible states of your input and do a table lookup, instead of computing the value.
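A sketch of the table-lookup predicate (fixing the variable order p, q and indexing rows in binary; the encoding is mine):

```scheme
(define (bits->index bits)  ; most significant bit first
  (if (null? bits)
      0
      (+ (* (if (car bits) 1 0) (expt 2 (length (cdr bits))))
         (bits->index (cdr bits)))))

;; The table above: (F F) -> T, (F T) -> T, (T F) -> blank (false),
;; (T T) -> T, stored by the binary index of the row.
(define truth-table (vector #t #t #f #t))

(define (predicate p q)
  (vector-ref truth-table (bits->index (list p q))))

(predicate #t #f)  ; => #f, a lookup instead of a computation
```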
Yeah, elements in the disjunctive form.
"…the truth-conditions are *tautological*. In the second case the proposition is false for all the truth-possibilities: the truth-conditions are *contradictory*. In the first case we call the proposition a tautology; in the second, a contradiction."
Again, first-year mathematical logic course.
Tautologies are also called "laws of logic".
Indeed, but tautologies may also serve as a way to simplify expressions (reduce formulas).
Contradictions may serve as compile-time checkers of correctness.
So, supposedly, we are interested in creating "theorems", that is, tautologies with a colossal number of input bits, that are hard to infer by brute-force.
"…For the former admit *all* possible situations, and the latter *none*. In a tautology the conditions of agreement with the world — the representational relations — cancel one another, so that it does not stand in any representational relation to reality."
I guess, he implies that logic is above reality here.
I really like this metaphor.
I can also relate this to an operator defining the behaviour of a system. You take an initial condition (input), and evolve it.
That is a very nice link.
What is a "logical product"? I guess, rewriting a function using an information-preserving rule (tautology)? However, without altering its sense, may still mean "making it a lot faster".
"…to which *absolutely any* combination corresponds. In other words, propositions that are true for every situation cannot be combinations of signs at all, since, if they were, only determinate combinations of objects could correspond to them. (And what is not a logical combination has no combination of objects corresponding to it.) Tautology and contradiction are the limiting cases — indeed the disintegration — of the combination of signs."
I am not very sure what he means by "combination" here. It may be a combination in the Scheme sense, a computational graph, or just multidimensional (in the sense of run-in-parallel) single-bit propositions.
"…they are not essential to the *symbol*."
So, you can write a procedure with a lot of operations in it, but if it always produces #t or #f, all of those operations are in vain.
Optimise them out!
"…a description of the propositions of *any* sign-language *whatsoever* in such a way that every possible sense can be expressed by a symbol satisfying the description, and every symbol satisfying the description can express a sense, provided that the meanings of the names are suitably chosen. It is clear that *only* what is essential to the most general propositional form may be included in its description — for otherwise it would not be the most general form. The existence of a general propositional form is proved by the fact that there cannot be a proposition whose form could not have been foreseen (i.e. constructed). The general form of a proposition is: This is how things stand."
"This is how things stand."
Seems true, but useless?
In other words, there is a function that produces all of the information about the world described in the input – it just outputs all the input verbatim.
"Suppose that I am given *all* elementary propositions: then I can simply ask what propositions I can construct out of them. And there I have *all* propositions, and that fixes their limits."
This statement is a little shaky with infinite inputs.
But if the input is finite, then enumerating all possible conjunctive forms on the input length will give the list of all possible propositions.
Again, their number is colossal.
"…(and, of course, from its being the *totality* of them *all*). (Thus, in a certain sense, it could be said that all propositions were generalizations of elementary propositions.)"
Seems clear. All meaningful functions on N bits: we can pre-compute them all, and relax.
And then we can use that "seemingly, existing" function output in new computation?
Is that what he means?
Because everything is eventually boolean and binary. Sort of.
Because functions have to work on something.
"…*cannot* be understood unless the sense of ‘p’ has been understood already. (In the name Julius Caesar ‘Julius’ is an affix. An affix is always part of a description of the object to whose name we attach it: e.g. *the* Caesar of the Julian gens.) If I am not mistaken, Frege's theory about the meaning of propositions and functions is based on the confusion between an argument and an affix. Frege regarded the propositions of logic as names, and their arguments as the affixes of those names."
This is a nice discussion of the difference between names and applications.
Something that seems totally obvious to programmers.
With generalised functions, though, you could have (+ 'cardinal a b)
Unclear.
Theory of probability is, in fact, totally deterministic, if we take distributions as the basic property of existence.
"I will give the name *truth-grounds* of a proposition to those truth-possibilities of its truth-arguments that make it true."

Truth-values | In words | Sign |
---|---|---|
(T T T T) (p, q) | Tautology (If p then p, and if q then q.) | (p ⊃ p . q ⊃ q) |
(F T T T) (p, q) | Not both p and q. | (~(p . q)) |
(T F T T) (p, q) | If q then p. | (q ⊃ p) |
(T T F T) (p, q) | If p then q. | (p ⊃ q) |
(T T T F) (p, q) | p or q. | (p ∨ q) |
(F F T T) (p, q) | Not q. | (~q) |
(F T F T) (p, q) | Not p. | (~p) |
(F T T F) (p, q) | p or q, but not both. | (p . ~q : ∨ : q . ~p) |
(T F F T) (p, q) | If p then q, and if q then p. | (p ≡ q) |
(T F T F) (p, q) | p | |
(T T F F) (p, q) | q | |
(F F F T) (p, q) | Neither p nor q. | (~p . ~q or p \| q) |
(F F T F) (p, q) | p and not q. | (p . ~q) |
(F T F F) (p, q) | q and not p. | (q . ~p) |
(T F F F) (p, q) | q and p. | (q . p) |
(F F F F) (p, q) | Contradiction (p and not p, and q and not q.) | (p . ~p . q . ~q) |
I guess, this is a first step in rewriting conjunctive forms into something algorithmic.
*Truth-grounds* seem to mean "the subset of inputs on which the function is true".
We should be able to avoid computing the second function then.
Same.
Seems like another optimisation opportunity.
What about the cases when p→q, but not always the opposite?
I guess, this should urge us to believe that we can find those “propositions that follow”.
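A sketch of finding those "propositions that follow" via truth-grounds: p follows from q iff every truth-ground of q is also a truth-ground of p. Rows are indexed in binary as in the lookup-table sketch above; the encoding is mine.

```scheme
(define (subset? a b)
  (or (null? a)
      (and (member (car a) b)
           (subset? (cdr a) b))))

(define grounds-of-and '(3))    ; (and p q) holds only on row (T T)
(define grounds-of-p   '(3 2))  ; p holds on rows (T T) and (T F)

(subset? grounds-of-and grounds-of-p)  ; => #t: p follows from (and p q)
(subset? grounds-of-p grounds-of-and)  ; => #f: not the other way round
```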
Okay… Not sure I understand why this is important.
This is how two propositions may behave on the same input.
Not sure I agree. Suppose q is true if p is true, but never uses p directly as a call. Then the structures are unlikely to show us the connection.
Perhaps, if "can" is seen as "it is possible, although may be computationally expensive", then I am fine.
Since that "follows" relation only depends on the input.
His notation is ugly. But generally he is saying the same thing – logical connections are defined by operations on input, not structures of procedures.
This almost seems as if he is suggesting that interpreters should be written in themselves.
Meaning, not conditioned on input? Because for full generality, deductions should be input-independent? (Be tautologies?)
Of course, because they are input.
Different inputs are just different inputs.
Even if the inputs we want to process are not evenly distributed, we can reduce them to inputs that are evenly distributed.
I think, this should be seen as: "Do not confuse system evolution and different inputs."
An interesting thought!
I guess, we can see "freedom of will" as "true I/O". Choosing the next inputs.
Hehe.
A political statement.
But, again, their computational complexity may be different.
Logically, yes. Computationally, though, speed, again, is the key. Furthermore, we may have different places in memory corresponding to two identically behaving procedures.
He is repeating himself. We can use tautologies for optimisation and rewriting.
That is a metaphorical statement? What is a "common factor"?
That's a bit Markovian in spirit. That's, again, if we assume that inputs are uniformly distributed.
An example, ok.
We can compute probabilities by simulating successes and failures?
Fine, as long as there is some intrinsic uniformity to the input. (Maybe the non-uniformity of input is called "luck".)
Again, this depends essentially on the input.
Here Wittgenstein is trying to justify statistics rather than probability. Still, not very convincing, but let it be.
Not very rigorously defined.
So, a probabilistic proposition is a proposition about other propositions. Isn't this meta-logic, again? Your propositions must be in the memory.
This "complete picture of something" is important.
Let's think about this for a moment. What is a "structure of a proposition"?
"Internal relations" are the ones determined by the structures themselves, rather than other propositions.
Aha, that's what in Scheme we call "combinations".
However, Wittgenstein needs to define what an "operation" is.
Again, that seems easy, but it is not.
(and elementary-proposition-1 elementary-proposition-2) is an expression, with two elementary propositions and a rule, wrapped into a combination.
Again, this is obvious, but informal.
The result, I guess? Because currently, in this form, we see no obvious dependency on the internal similarity.
Are these sequences always expressible logically unambiguously? What about those aperiodic tilings… uncomputable ones?
I think this claim is wrong.
Does it mean that operations are only defined "by example"?
I guess, "logically meaningful" is the important here.
Perhaps, that is why syntactic structures in Scheme are only accessible compile-time?
Boolean functions, basically?
And those have to be "basic" operations, I guess.
I think that "manifests" here means, either "can be assigned to a variable", or "it's value given input can be assigned to a variable".
Seems like he's saying that the "operation" is not inside the input, but rather an external thing, from the domain of "logic" (or "computation")?
Ow… maybe he's actually struggling with recursion here?
His obsession with "variables" in this chapter comes from an inability to distinguish "eval" from "substitute"?
Again, he seems to be struggling to properly distinguish "functions", "macros", and "primitives".
Ha! A function cannot be its own argument. I think that the logical system constructed by Wittgenstein is actually weaker than Lisp.
And he still hasn't properly defined "functions".
So an "operation" is like a macro mixed with application. For him, functions cannot call themselves, but operations can iteratively apply functions to their own outputs.
…*more than one* operation to a number of propositions.
What is ξ here?
So, he desperately needs looping constructions in his language, and is trying to invent notation for them.
Yeah, I guess, looping has been boggling scientists for a long time.
Well, if information is not lost. Although, I guess, if your machine time is cheap, you can recompute everything from scratch. This is basically, in the worst case, backtracking.
He has to make "not" an operation, because his functions are somehow dysfunctional, I guess.
Again, "truth-operations" are intertwined with "truth-functions".
Okay, so for him, functions are "substitutable", not "evaluatable". But operations are always eager. For primitives, it is irrelevant whether they are lazy or eager, because they are essentially bits.
An example of evaluation.
I think, there is some trouble with undecidability here.
What is this Frege and Russell sense? Perhaps, he wants to say that there is no need for such a thing as a "logical object", if it can be expressed in functions and operations.
Didn't he himself object to the idea of "same" functions?
Some polemic with other language builders. How familiar.
"We can express this in that, therefore this is not fundamental. Get lost, you, Common Lisp people."
For them at the time it is still not obvious that everything is basically one huge array of XORs.
The problem is usually not infinity, but rather fighting infinity.
Again, evaluation is boggling him.
Not sure I understand this example. He suggests reducing everything? That does not always work.
I guess. Shall we use them for precomputation? "All" is usually unmanageable.
No, I do not understand. Those "all truth-operations" may need to evaluate other propositions.
I guess, this means that evaluation should have proper semantics, and the primitives of the language must be described somewhere, in a language standard.
Well, even in Scheme we have different versions of begin.
Other languages are even worse in terms of defining primitives.
"(In the *Principia Mathematica* of Russell and Whitehead there occur definitions and primitive propositions expressed in words. Why this sudden appearance of words? It would require a justification, but none is given, or could be given, since the procedure is in fact illicit.) But if the introduction of a new device has proved necessary at a certain point, we must immediately ask ourselves, ‘At what points is the employment of this device now *unavoidable*?’ and its place in logic must be made clear."
I guess, in the same way as Wittgenstein dislikes truth constants (I think he prefers having 1 and 0 instead, and a (false? ) predicate), he dislikes numbers as primitives. Russell has 0 as a basic number, I think, and a +1 operation. Church encoding lets you live without even 0.
Is this again a reference to reasoning over types instead of instances?
I think that this is an emotional statement akin to the ones programmers call KISS. (Keep It Simple, Sir)
Yes, but how is he going to introduce this "most general combination"? By a generative grammar?
Hehe, in Scheme everything is used in sexps. Homoiconicity eludes him.
Nice metaphor, but incomplete. Indeed, in human languages, punctuation marks are used for many "operator-like" purposes, but I do not think that logical operators are from the same category. Maybe brackets are.
We need to define "the form of all propositions" before doing any reasoning. Whether this "form of all propositions" is formal syntax or formal semantic, I am not sure.
I think that he still needs #f as his logical constant. On the other hand, \(A \land \lnot A\) is false, so maybe not even that. All the other propositions can be combinations of elementary propositions and primitive operations.
Like, being combined according to the laws of the language.
That is the law of combination of propositions?
This seems like a difference between a compile-time (or read-time) and run-time error. "Socrates is identical" is a run-time error, but not a compile-time error.
I think that elementary propositions still have to be self-evident.
Because we cannot give a sign any sense other than it has according to the laws of logic it is written in. But we can feed a Scheme program into a Common Lisp interpreter and observe all kinds of errors.
He is ignoring performance considerations, again!
An example of confusion, I guess? One of the "identical"s is a symbol that has to resolve to something.
The other, I guess, has to be, in Wittgenstein's words, an "operation", that maps Socrates to Socrates, or Socrates to the value of Socrates.
I guess, the number cannot be 0. But with XOR we should be able to do anything.
Unclear. Does by "multiplicity" he mean "being able to appear as argument in a proposition"?
A question of language design, essentially.
Is he actually trying to introduce the Horn rule here?
Either any of the (not sub-proposition) is T, or the proposition is T.
So now Wittgenstein is trying to introduce list processing.
Looks plausible? What about those ugly uncomputable sequences?
That's just notation?
So he's trying to find an exact expression for this "list negation operation"?
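A sketch of that operation on already-evaluated propositions: N(ξ) is true exactly when every proposition in the list ξ is false.

```scheme
(define (N xs)
  (or (null? xs)
      (and (not (car xs)) (N (cdr xs)))))

(N (list #f #f #f)) ; => #t, none of them is the case
(N (list #f #t))    ; => #f
(N (list #t))       ; => #f, recovering plain negation ~p for p = #t
```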
Again, he seems to be writing an explanation of the Horn rule. See https://en.wikipedia.org/wiki/Horn_clause
Emotional clause. I think that it is not "the world" that is using crotchets and contrivances, it is our human brain, especially its analytic part (that is not unlike a Turing machine).
Hm… I think there is a confusion between the universal rules and the concrete values.
He certainly struggles with understanding something here.
We cannot just substitute the value of p into the expression.
We actually need to evaluate (not p) here.
This is still speaking about substitution vs evaluation. And hence, reduction/optimisation in the substitution case.
There should be some way of rewriting each function that returns 1 only when p and q are both 1 using (and p q), and never using p or q directly.
Perhaps, this is also an introduction to the idea of abstraction.
Equivalent to symbols… I think, in Scheme-speak it will mean that "rules" (whatever that is) will be indistinguishable from "variables" that can be resolved. But variables will resolve to values, and rules will resolve to combinations.
I think that what he is trying to say here is that logical operations should be closed, that is should be able to be combined indefinitely.
So, bits can be flipped, just that.
"I dissociate the concept *all* from truth-functions. Frege and Russell introduced generality in association with logical product or logical sum. This made it difficult to understand the propositions ‘(∃x).fx’ and ‘(x).fx’, in which both ideas are embedded."
For him, generality is not a part of the logical system.
What is a "logical prototype"? Why would it emphasise constants?
To a function? Or in a logical derivation?
That's clear. Input cannot grow.
There is something important here that I do not understand… So, instead of "X is possible", he wants to say "X has sense"? That is, instead of "there is some argument on which X gives a correct answer", he wants universality over input?
So, still, he insists that the logical system should be complete. Then substituting the initial condition, we should get the actual trajectory in the state space.
I think that later logicians prove this to be impossible. But for a closed world, let it be.
Still, this reminds me about the fact that we can do some fun stuff with no data, only programming the machine with a tiny seed. Mandelbrot and stuff.
I guess… unless you have no input…
Entirely general propositions… I guess, means propositions that make sense on every input. The most general… should depend on all of the input, I guess?
So one bit changed in the input results in the total change, which can be reflected in the behaviour of the function that uses all of the input.
How does this correlate with his discussion of equality?
4.24, 4.241 and such?
"…that *only* a satisfies the function f, and not that only things that have a certain relation to a satisfy the function f. Of course, it might then be said that *only* a did have this relation to a; but in order to express that, we should need the identity-sign itself."
And? Yes, a complex logical expression is optimisable. But optimisation is not free.
This is, again, the difference between eq? and equal?.
I think that this is his big mistake.
Maybe, when you are modelling the world, you need to make sure that all things that are equal? are also eq?, but you can do a lot of things even if that condition is not satisfied.
Again, he thinks that this clarifies things, but it is so unessential to philosophy.
Yeah, yeah, programming style should be good. Don't make useless variables.
But now we do not consider that terribly important, because machines can spot a lot of errors like this.
Seems that Wittgenstein gets really annoyed by extra notation.
How would he solve cases when he does not, at first, know that x=y, but that is actually derivable?
So, he has derived this particular logic of his, in which it is possible to avoid using equality in exchange for the manipulation with quantifiers.
In 2021, I think, we would consider this a defeat rather than an achievement.
Working with quantifiers is a pain, working with equality is a bliss.
It is an interesting idea that you "cannot write meaningless code".
This is the first time he actually refers to a concrete place in the works of his predecessors.
The "axiom of infinity" is defined at 120.30 or something like that, in the second volume of Principia Mathematica.
In any case, it would have been amazing to work in a world where distinct things are by default always distinct in description, but that is, sadly, almost never the case.
So, Wittgenstein insists that propositions, and, perhaps, other logical constructs cannot be reasoned about by the system they are parts of.
This sounds both good and bad. Good, because I always found statements similar to the 'Gödelian proposition' just wrong. (The one that has a number, and states that the statement with that number is not provable.) It sounded wrong because interpreters should not be able to reason about themselves.
On the other hand, why not? There are virtual machines, there are metacircular interpreters. Why not, after all?
This clause is interesting because it uses ‘~(∃x).x = x’. Indeed, I see what Wittgenstein was annoyed about.
Because in his world, truth-operations evaluate.
Ah, ok. He seems to be repeating his premise that relationships other than binary are not useful. Ok, now that in computers our world is binary, this seems obvious.
Basically, this is a call for an introduction of more constructions into the language itself.
This requires a bit of thinking, but it essentially means that since everything consists of primitives, and people consist of molecules, and their behaviour is computable, there is no such thing as something that "makes decisions".
Maybe, under "superficial psychology" he means "free will". If everything is computable, I guess, you can say that there is no free will.
A compiler should not compile senseless things. I guess, Wittgenstein would like Haskell.
Elementary propositions consist of names. Without loss of generality, just two names, 1 and 0.
Cannot give the composition of elementary propositions? I cannot understand.
This is very practical and should be compulsory to read for all physicists.
My English parser broke here. Indeed, the replacement of "what?" with "how?" is quite a step forward in terms of the theory of knowledge.
"What?", I guess, should be defined by philosophy, not logic.
Well, you can do all sort of fun logical games, such as plotting the Mandelbrot set, without any input.
No, I cannot understand. Indeed, numbers are not fundamental, as Russell proves. So, relations between objects are not fundamental too?
Does that mean that there should be an algorithm to generate random valid code?
Does he imply the need for lower bounds? At least in the length of the source code?
In general, there are various methods of finding lower bounds.
Why not? Syntax checkers are not new. Static/dynamic analysis tools are not new.
That "insight"? Indeed, he is already thinking about an algorithm for generating true statements.
Input bits are all equal in rights.
The first and the second statement are true. The third does not seem to follow from them, but I can see why it should be the case.
Well, everyone who understands logic, I guess. This kinda implies the ability to implement an algorithm in any language.
No, wait, people are very capable of creating nonsensical sentences.
Everyday life is a very complex thing actually, to model.
I think this is an emotional stance against breaking abstraction barriers.
Is this a working method of thinking? Announce facts, and try to reason whether they are meaningful?
"The limits of *my language* mean the limits of my world."
Presumably, we are thinking with language. But let us imagine a person who used to have the sense of smell, and then lost it. He still remembers what a smell is, but it is no longer in his world.
What we cannot digitise, we cannot produce any program about.
"…what the solipsist *means* is quite correct; only it cannot be *said*, but makes itself manifest. The world is my world: this is manifest in the fact that the limits of *language* (of that language which alone I understand) mean the limits of my world."
Solipsism cannot be "articulated", because it itself presupposes the fact that the words of the articulation will not be transmitted anywhere.
Computo ergo sum.
Solipsism.
In fact, this is how good texts are written! A writer gives an account of himself first, in order to let the readers understand from which viewpoint he is writing.
I remember that there used to be a similar tradition in the common law court procedures. Explain yourself first.
A computer.
Well, in a computer we have a lot of things to query the underlying system. cpuid, cpuinfo, just reading the interpreter's memory.
(Wittgenstein here has a sketch of a potential visual field.)
So, the input that we expect to be a picture of a chair may actually be a wavefront on the camera sensor. We just do not know.
Okay, so he is actually a solipsist.
So, "self" in philosophy is a model of the interpreter that is running our simulation.
All possible p (inputs), all possible ξ (propositions), all applications of the Horn rule, which should tell us whether non-elementary propositions are derivable?
Yah, seems like everything should be one giant Prolog interpreter.
Also, ξ is of colossal size.
So we first generate all possible outputs.
The argument here is, I guess, all propositions (not necessarily elementary) which have already been proven.
Again, this is kind of Church numerals.
Yes, Church numeral.
This is the idea we often see in later mathematics, although not that much in programming. "What behaves as X can be used as X".
This +1, I guess, contains a lot of stuff under the carpet.
Does he mean the "theory of sets"? Or the distinction between classes and sets?
Or 'laws of logic'.
But they can condense information from the input.
Fortran has a Bessel function as a primitive. Surely superfluous!
The status of "laws"?
Well, make your programming languages consistent.
And? Is this the property of a particular logic? Or logics in general? Or this world? Or human brain?
So, laws of logic are tautologies in the logical notation. Is that the thing Wittgenstein wants to say?
Again, we use tautologies to optimise the code.
And we use contradictions to find bugs in code.
It's kind of like a sketch of a method for the creation of new tautologies…
I am trying to think whether this is actually an inverse way of writing the Horn clause? Like, in the "Definite" form of it.
But I think he still needs his 'truth-operations'.
So, this seems to be an algorithmisable rule. Is there an inference engine that supports this way of inference?
Well, logical law must be true for all possible inputs. I guess, unconfirmable here can be seen as "you need to check it on all possible inputs".
Not obvious to me. If it is all subjective and non-verifiable…
And programming is seen as a theory of writing code?
Well, Gödel's incompleteness theorem relies on the ability to apply the laws of logic to the laws of logic.
Isn't this 'cannot be subject' itself a law of logic then?
Yes, #t is a tautology.
Axiom of Reducibility is in Introduction, Chapter 2, Section 6 of Principia Mathematica.
It roughly says that for every function f: A -> B there exists a predicative version: g: (A,B) -> {0,1}
Well, since in Wittgenstein's theory it is possible to avoid non-binary things in general…
"It is possible to imagine a world in which the axiom of reducibility is not valid".
How is it possible? Well, programmatically, it is obvious: you just code in f(a), but not g(a,b). However, writing g, having f and equality, is trivial. Wittgenstein's logic has no equality, but I remember him providing equality as a composite logical operation.
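A sketch of that triviality, assuming we do have equality (f here is my stand-in):

```scheme
;; From a function f : A -> B, build the predicative "graph" version
;; g : A x B -> {0, 1}.
(define (f a) (* a a))
(define (g a b) (if (equal? (f a) b) 1 0))

(g 3 9)   ; => 1
(g 3 10)  ; => 0
```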
But in "reality", whatever this means, it's absolutely self-evident that if f exists, g exists.
It is possible, though, that f is not computable, while g is computable.
The last sentence is important. If you have a compiler, you have all the possible logic inside of it. And at least logically, since all compilers are Turing-complete, all logics are roughly the same.
Well, if they evaluate to 1?
What is a "surprise"?
Well, informationally, logic only squeezes information from the input, it cannot do more.
But for me, certain results are still surprising.
Again, this suggests that optimisation is a valid area of logic.
I keep saying that speed is also important.
In Prolog, the result comes together with a derivation of that result (if you do not use cut).
But, again, computability leaves a place for surprises.
Say, you make your computer compute an uncomputable F(x), on some x, and it halts.
This is a surprise, isn't it?
Yes!
One depends on the input, and one does not. One is an optimisation technique; the other is a result of working on the input.
I think, he is being desperate here. Many languages have facilities for metaprogramming. Modus ponens eventually makes its way here with the Horn rule.
Really? There is no way to "fit" a proposition by means of an inferential procedure?
Well, as long as they form a complete set. (In the Gödelian sense.)
Well, having huge programming languages is usually not a problem. The opposite direction is usually harder.
And still, we develop it. We at least develop new, stronger programming languages and provers.
Intuitively it relies on the "axiom of reducibility". I am not sure I understood Wittgenstein's proof of why it is not required, but so be it.
A thought is a "logical picture of facts". Since logic is outside of the world, it does not consist of facts.
Sure, we use mathematics to predict something in the "world", not for itself. If we want to prove something with agda, we still want social appreciation.
And, again, it is believed that you can express one in terms of the other.
Again, this confusion about equality.
eq? is not the same as equal?, and an equation is not a law, as it is only true for certain x.
Logic, not the world.
I think this use of the word "equation" is not correct in the modern sense.
I think that Frege is right, and Wittgenstein is wrong.
They "mean the same" for some x. Their "senses" for arbitrary x's are different.
Well, we still do a lot of mathematical experiments.
I think this is again due to the lack of eq? in his system.
Or, maybe, recursive eq?.
For a particular x.
I think this works only for algorithmisable problems.
Yes, there is something wrong about proving statements with brute force.
I agree.
Because it is not about the world? Meh, we are still drawing a lot of inspiration from the real world when programming or deriving. We the people are also a part of the world, even though for solipsists it is not so evident…
Well, that's not true, unfortunately. For high-school mathematics, maybe. But for actual mathematics… no. Although perhaps with automated provers you can do at least a part of the work by pure substitution.
Can this really be called "a proof"? And frankly, it seems much more evident if seeing powers as tokens that can be tossed, rather than pure substitution.
Law is the behaviour or the compiler. Outside is the input.
And, still, I think that it is used axiomatically, in general.
Wikipedia says "Proofs or constructions using induction and recursion often use the axiom of choice to produce a well-ordered relation that can be treated by transfinite induction. However, if the relation in question is already well-ordered, one can often use transfinite induction without invoking the axiom of choice."
So, "often" is not always.
Form of which law?
Hm… is the idea here only that causality principle can be reformulated in terms of other principles?
The law of least action is clearly a logical construct. I am still not convinced that causality is.
We have "first integrals". And since physics obeys the differential equations with extreme precision, conservation laws naturally arise.
Ok, I agree. They first appear as computational tricks, and later are supported by evidence.
So, mechanics is not considered to be science here, right? It is mathematics, that later forces the input to be interpreted in a certain (inclined to differential equations) way?
I am thinking that he is missing the interpretation of output here.
So, mechanics tells us nothing about the world until we run our mechanical model, written in the language of logic, on the input… and compare with the measured results.
If the result is precise, the description is good and the world is mechanical.
That's like, the Hilbert's 6th problem? We would still need to digitise the input, but if we have a complete multiphysics simulator, we would be able to model a world.
Is "physics" here notably different from "mechanics"?
At least \(F=mg\) should be replaced with \(F=Gm_1m_2/r^2\).
Again, here mechanics is seen as a sufficient development of the laws of logic (of moving bodies).
"Cannot be said", I believe, here means "cannot be digitised". Indeed, you program the logical system, and causality is an intrinsic property of this system. You cannot "declare" the system to be causal, because it is the way you are writing it.
Well, "thinkable" here, I guess, should be seen as "computable". There are no connections other than those you write in the code.
Who is the Hertz he is writing about? The same Heinrich Hertz, the physicist?
Well, our input is immutable, so there is no time other than machine time. The machine can count cycles of computation, but suppose it is hibernated, or even unplugged (NVRAM machine, obviously).
I think this clause is due to the fact that in computing time is ill-defined.
I think that here Wittgenstein is trying to approach the problem of verifiability. Logical correctness should be established by comparing with "other digitization", or with "other net", and this glove example is just an example of something hugely disparate.
Because everything we are discussing must be digitised.
He is a bit manipulative here – his "can happen" should mean "can happen in a machine".
"Cannot be described" means "our model does not support that".
Less code -> fewer errors.
Well, when you find a discrepancy, you implement more code.
Because that is how models work.
In fact, they do not really exist, but are just programs we write to predict further observations.
God is a scientific hypothesis, but it has very bad explanatory power, as it is essentially a huge dictionary.
The modern system is much more straightforward algorithmically, and has better predictive power.
Ok.
Well, we can work to make wishes happen.
That is, if free will exists.
However, free will does not exist in a machine.
Well, this means that we should be able to find a certain input that makes our dreams happen (possible in the model).
But that is only in the machine.
Well, I can't avoid nitpicking on the Bose-Einstein condensate (because the model is different).
But yes, in classical mechanics this is prohibited by the digitisation model.
What is even a "value" in this case?
I guess, this is a reference to the free will again. A machine does not assign value to the simulation. Operators may want certain outcomes (value them more), and thus try to find conditions that satisfy them.
Like, "A is good" is not a proposition, in the general sense, because good and evil are subjective.
I believe it is possible to digitise some ethical system and make inferences, but since you cannot compare it with ground truth, it is meaningless.
I can't say it is "clear", but this certainly seems to be the case. Subjective -> unverifiable.
Yeah, again, because reward and punishment are not ethical per se, and feelings of good and bad are not verifiable and ill-defined.
Again, the main conclusion from this statement is that modelling ethics on a computer is unlikely to be successful.
There are two "will"s. One is "want", the other "actions".
Wanting cannot be classified as good or bad, it is intrinsic, actions are not even obvious to be a subject of "free will".
So, his position here is that logically it is unlikely to condense any useful prediction from "digitisation of ethics or will".
That is almost the philosophical treatment of set!
in Scheme.
Mutation creates a different world.
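A sketch of this reading in Scheme; a single mutation changes the world for everything that shares the structure:

(define world (list 'sun 'moon))
(define observer world)      ; denotes the very same structure
(set-car! world 'supernova)  ; one mutation...
observer                     ; => (supernova moon)
;; set! on a variable rebinds its location, so closures that
;; captured it now live in the changed world too.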
Maybe we can even expect the "digitised input" of a happy man's brain to be different from that of an unhappy man's.
Again, emotions destroy predictability to a large extent.
In any case, all of that is not needed if there is no free will, and people are just computers.
The simulation ends. Each time you turn off your computer, you are destroying a world.
Well, time is in general ill-defined in computers. If you freeze your program in a debugger, it kind of exists forever.
So, the point here is that we all may be living in a simulation, which something outside of this computer is running to model/predict something.
Or, rather that we cannot disprove the opposite.
Just as the memory state is not aware of human interventions in a debugger. (There are different anti-debugging tricks though!)
Because logic is not a part of the input. (Again, Lisp provides self-mutating programs, so maybe this is a bit obsolete.)
Well, we do not know and by the laws of logic cannot know what is the machine that is running the simulation.
"This book is dedicated, in respect and admiration, to the spirit that lives in the computer."
“I think that it’s extraordinarily important that we in computer science keep fun in computing. When it started out, it was an awful lot of fun. Of course, the paying customers got shafted every now and then, and after a while we began to take their complaints seriously. We began to feel as if we really were responsible for the successful, error-free perfect use of these machines. I don’t think we are. I think we’re responsible for stretching them, setting them off in new directions, and keeping fun in the house. I hope the field of computer science never loses its sense of fun. Above all, I hope we don’t become missionaries. Don’t feel as if you’re Bible salesmen. The world has too many of those already. What you know about computing other people will learn. Don’t feel as if the key to successful computing is only in your hands. What’s in your hands, I think and hope, is intelligence: the ability to see the machine as more than when you were first led up to it, that you can make it more.”
—Alan J. Perlis (April 1, 1922 – February 7, 1990)
Again, because all is just an evolution of a machine memory.
Although, I think, Wittgenstein is, again, unaware of undecidability.
Nonsensical here is not a negative characteristic.
Doubt means "being unsure your program is without errors".
Scepticism, I guess, whether you have digitised your input correctly. And this is nonsensical, because incorrectness of digitisation cannot be distinguished from incorrectness of logic.
So, he believes that it is possible to make a complete logical model of the world. (Which itself has been disproved.)
And the answer is that life has no "problems of life", those are meaningless.
But due to incompleteness of mathematics, there will always be problems to prove.
Why do we program? Because it is fun?
Anyway, it is not about logic and is unlikely to be answered in precise statements.
Machine learning is (still) an example of such a thing. Clearly works, and is humiliatingly unclear about how it actually works.
If we explain them at some point, they will stop being mysterious.
And it is the way to digitise the world.
The main point of this clause is not the denigration of non-natural sciences, but rather the idea that even soft science, via the philosophical method, can be digitised sufficiently to be described in a programming language.
And then it becomes a natural science and can be treated accordingly.
Again, nonsensical here is not a denigrating characterisation. Nonsensical means that there is no formal logic that describes the philosophical method of the Tractatus.
That could have been a logic of the machine that is simulating us.
However, remember, a machine may have cpuid, cpuinfo, and similar instructions. But the programmer knows which part of memory corresponds to them. For robots, they are indistinguishable from random places in memory. The Tractatus may be the cpuid.
I wonder how many people have written a commentary as extensive as mine, and had it deleted after reading this statement.
For me this clause is perfectly clear: do not try to create "digital court jury", "digital moral code", "aesthetics assessment algorithms", and such. Those things are subjective and ill-defined.
It's not that you cannot "create some incomplete model" of those domains. It is that those programs will start to eventually annoy your users by being outrageously wrong and exploitable. Like those tiny grids that you can glue onto your car's license plate to make it completely incomprehensible to image recognition systems.
All those "StackOverflow" "Rating, Carma, Voting"; Habr's "Carma, Rating", Facebook's "Likes" and "Emotions", are going to become humiliatingly fake and representing nothing very soon after being implemented.
Lockywolf
This review took 36 working hours.
This document is likely to become obsolete by ~2024.
Working with natural languages is hard. Many of us remember high school language classes with horror.
Nevertheless, spellchecking is an expected application for computers. At some point I decided to make spell checking work where I need it to work.
Obviously, it turned out to be hard, and this document was created as a memo on how it was done.
The field is huge and heterogeneous, as Linux is an anarchic system, so I will limit my scope to the following targets:
Languages:
Applications:
References:
There are, at the moment, essentially three approaches to checking spelling.
A lot of natural language processing theory focused on rule-based approaches, until the Internet giants accumulated enough data to train models that outperform even the most advanced rule-based systems. Whenever you can, I suggest using machine-learning-based methods.
However, when the task gets closer to implementing spellchecking rather than merely using it, rule-based methods retain their validity, because they are much easier to hook into your system. Ordinary people usually have no means to train their own models anyway.
There are also services that let you submit your text for checking remotely. This may be better in some cases, as remote parties may have better resources for developing spellchecking methods. However, sometimes the Internet connection is not very good, and sometimes you are not allowed to share your texts.
A spell checker essentially consists of four components:
The difference between a dictionary and a dictionary package usually comes from the breakage of an abstraction barrier. Dictionary packages turn out to be engine-specific. Even worse is when a dictionary package is application-specific (which, sadly, happens). And although converting between different dictionaries and dictionary packages is often not too hard, it requires work.
Things do not appear out of nowhere in this world. Everything is done by some people driven by different motives.
This section lists several projects that continue improving natural language support in computing.
The oldest and the most widespread engine. Slackware only has an English dictionary for it.
Supposedly the best engine for English, but Slackware has dictionaries for most languages.
The Russian one is from Lebedev.
A library that is built into Hunspell.
Supposedly, the best spellchecker for all languages, except, maybe, English.
Package for Slackware.
A meta-checker that is ispell-compatible, but can use other engines. Should be used if the software allows you to choose a language.
~/.config/enchant/enchant.ordering
*:nuspell,hunspell,aspell,ispell
en:aspell,hunspell,nuspell
en_GB:aspell,hunspell,nuspell
Check for a dictionary:
enchant-lsmod-2 -list-dicts | grep ru
A grammar checker, supports English and Russian.
A great tool, actually. And the Emacs package is of great quality. It eats a lot of CPU, though.
Is on slackbuilds.org
Stylistics checker, quite strong.
https://gitlab.com/Lockywolf/lwfslackbuilds/-/tree/master/proselint
flycheck has built-in support for proselint.
You need to enable it in Emacs, using
(flycheck-verify-setup)
On Slackware you need nodeenv, packaged at https://gitlab.com/Lockywolf/lwfslackbuilds/-/tree/master/nodeenv
An auxiliary (but enormous) checker for everything textual.
DATE=$(date --iso)
DNAME="${DATE}_textlint"
mkdir -p ~/bin/
nodeenv ~/bin/"$DNAME" # assumption: the flattened original lost this step; bin/activate comes from nodeenv
source ~/bin/"$DNAME/bin/activate"
npm install --global textlint
npm install --global textlint-rule-rousseau
npm install --global textlint-rule-diacritics
This checks spelling when you ask.
(use-package ispell ; :demand t
  :ensure t
  :hook (tex-mode-hook . (lambda () (setq ispell-parser 'tex)))
  :config
  ;; (ispell-change-dictionary "british-ise-w_accents" t) ;; aspell-specific
  ;; превед привет
  (setf ispell-program-name (executable-find "aspell"))
  ;; This will not change the language automatically. You still have to select it manually.
  (setq ispell-local-dictionary-alist
        '(("ru-local"
           "[АБВГДЕЁЖЗИЙКЛМНОПРСТУФХЦЧШЩЬЫЪЭЮЯабвгдеёжзийклмнопрстуфхцчшщьыъэюя]"
           "[^АБВГДЕЁЖЗИЙКЛМНОПРСТУФХЦЧШЩЬЫЪЭЮЯабвгдеёжзийклмнопрстуфхцчшщьыъэюя]"
           "[-]" nil ("-d" "ru-yo") nil utf-8) ;; aspell
          ("british-local" "[A-Za-z]" "[^A-Za-z]" "[']" nil
           ("-d" "en_GB-ise") nil utf-8)))
  (ispell-change-dictionary "british-local" t)
  (setq ispell-silently-savep t))
ispell-dictionary-alist
is probably not the best here.
In Russian files, switch the language by using ispell-change-dictionary
.
Should be configured in some weird way in order to check both languages.
(use-package flyspell
  :demand t
  :ensure t
  :hook ((text-mode . flyspell-mode)
         (prog-mode . flyspell-prog-mode))
  :config
  (diminish 'flyspell-mode "🦋🧙")
  (setf flyspell-use-meta-tab nil))

(use-package flyspell-correct ; -ido
  :ensure t
  :demand t
  :bind (:map flyspell-mode-map
              ("C-;" . flyspell-correct-wrapper))
  :init
  (setq flyspell-correct-interface #'flyspell-correct-popup))
Works, but I would recommend trying spell-fu.
Only uses aspell, but supports what flyspell-mode and flyspell-prog-mode used to do. It is also very fast.
(use-package spell-fu
  :ensure t
  :demand t
  :config
  (setf spell-fu-faces-exclude '(org-meta-line org-link org-code))
  (global-spell-fu-mode)
  :bind (("C-," . spell-fu-goto-next-error)))
Quite unfinished.
;;;; LanguageTool
(use-package languagetool
  :ensure t
  :demand t
  :config
  (setf languagetool-language-tool-jar
        "/usr/share/LanguageTool/languagetool-commandline.jar")
  (setq languagetool-java-arguments '("-Dfile.encoding=UTF-8"))
  (setq languagetool-default-language "en-GB")
  (setq languagetool-server-language-tool-jar
        "/usr/share/LanguageTool/languagetool-server.jar")
  (languagetool-server-start))
Add languagetool-server-mode
to your hooks, if you have a strong CPU.
Otherwise, use languagetool-check
.
(flycheck-verify-setup)
and enable the spelling checker there.
(use-package flycheck
  :demand t
  :ensure t
  :config
  (setf flycheck-textlint-executable
        "~/binary_software/2021-05-29_textlint/bin/textlint")
  (setf flycheck-textlint-config "~/.textlintrc")
  (add-to-list 'flycheck-checkers 'proselint) ;; also enabled somewhere in customize
  (global-flycheck-mode)
  (diminish 'flycheck-mode "🦋✓"))
This means that the links have not stabilised, and many features do not yet work.
This is a homepage of Vladimir Nikishkin (login name “lockywolf”). For book reviews and various notes that are not guaranteed to be finished, refer to the “Notes” entry in the menu. For semi-formal “Howtos” (not necessarily computer-related, but primarily so), refer to the “Howtos”.
The Blog (also linked at the top of the page) is probably a more reliable way to read my posts, chronologically. However, this site is also valuable as a backup, and for hosting drafts.
My Internet presence is mainly the following:
I am interested in cooperating (or chatting) in the following areas:
Here’s the gitlab link: https://gitlab.com/Lockywolf/scsh-xattr-mindmap
Contrary to the name, it is actually in Chibi, not in scsh. I initially thought that scsh would be better due to more extensive POSIX support, but it turned out that Chibi was good enough.
It is a small-ish (500 lines of code) script to generate a graph from your filesystem tree. It accepts a few options (editable directly at the file top) and duplicates quite a lot of the GNU Find functionality, but I didn’t find a way to avoid doing that, as it has to use heuristics in order to prune the tree to a reasonable size.
The resulting image is like this:
I plotted the Slarm64 (unofficial Slackware for Raspberry Pi) repository tree, just for demonstration purposes.
The size of the image above is 1×2.5 metres. It’s large, but my original goal was to plot my whole file system. The ’size=’ parameter is tunable. I think it is reasonable to assume that you need at least 4 square centimetres per node, so a graph that large would accommodate about 4000 nodes. In my opinion, 8000 is still possible, but too tight.
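For reference, the arithmetic behind that estimate: \(100 \times 250 = 25000\) square centimetres, and \(25000 / 4 = 6250\) node slots under ideal packing; the ~4000 figure is presumably what remains after labels, edges and margins take their share.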
With the default settings the script ignores regular files, but traverses symlinks. In theory it also supports hardlinks, but you would need to turn on drawing regular files manually.
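For a flavour of the traversal core, here is a much-simplified sketch in Chibi Scheme. It assumes (chibi filesystem)'s directory-files and file-directory?, plus a caller-supplied pruning predicate; the real script layers the GNU-Find-like options, symlink handling and xattr support on top.

(import (scheme base) (scheme write) (chibi filesystem))

;; Walk a directory tree, printing "parent -> child" edges.
;; prune? is the heuristic that keeps the graph to a plottable size.
(define (walk path prune?)
  (for-each
   (lambda (name)
     (unless (or (member name '("." "..")) (prune? name))
       (let ((child (string-append path "/" name)))
         (display path) (display " -> ") (display child) (newline)
         (when (file-directory? child)
           (walk child prune?)))))
   (directory-files path)))

(walk "." (lambda (name) (eqv? (string-ref name 0) #\.))) ; prune dotfiles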
I made this script because I started to feel that I was forgetting what I have on my hard drive, which has amassed quite a lot of life history over the past 20 years (since hard drives became reasonably priced).
Use-cases and pull requests welcome. One more reason to create this script was to prove that Scheme can be a practical programming language.
Technologically, this code is not terribly advanced; the only trick that may be interesting to programming nerds is having the r7rs module and the main function in the same file (like scsh/scheme48 suggest doing), which requires procedural module analysis.
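Roughly, the shape is the following (a sketch with a hypothetical library name; whether the program part after the library definition is picked up depends on how the implementation loads the file, which is exactly why procedural module analysis is needed):

;; one file: an R7RS library definition followed by the program using it
(define-library (mindmap core)
  (import (scheme base) (scheme write))
  (export run)
  (begin
    (define (run args)
      (display "scanning: ") (write args) (newline))))

(import (mindmap core))
(define (main args) (run args)) ; scsh/scheme48-style entry point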
I had to glue on a couple of C bindings for sys/xattr.h, those are now available at the Snow-Fort repo. Those are Chibi-specific.
Hope you will enjoy it.
This post is a translation of the announcement of the international Scheme Workshop 2021 conference.
For those who do not click through to the full post:
The 2021 Scheme and Functional Programming Workshop is calling for submissions.
We invite high-quality papers about novel research results, lessons learned from practical experience in industrial or educational setting, and even new insights on old ideas. We welcome and encourage submissions that apply to any language that can be considered Scheme: from strict subsets of RnRS to other “Scheme” implementations, to Racket, to Lisp dialects including Clojure, Emacs Lisp, Common Lisp, to functional languages with continuations and/or macros (or extended to have them) such as Dylan, ECMAScript, Hop, Lua, Scala, Rust, etc. The elegance of the paper and the relevance of its topic to the interests of Schemers will matter more than the surface syntax of the examples used. Topics of interest include (but are not limited to):
Interaction: program-development environments, debugging, testing, refactoring
Implementation: interpreters, compilers, tools, garbage collectors, benchmarks
Extension: macros, hygiene, domain-specific languages, reflection, and how such extension affects interaction.
Expression: control, modularity, ad hoc and parametric polymorphism, types, aspects, ownership models, concurrency, distribution, parallelism, non-determinism, probabilism, and other programming paradigms
Integration: build tools, deployment, interoperation with other languages and systems
Formal semantics: Theory, analyses and transformations, partial evaluation
Human Factors: Past, present and future history, evolution and sociology of the language Scheme, its standard and its dialects
Education: approaches, experiences, curricula
Applications: industrial uses of Scheme
Scheme pearls: elegant, instructive uses of Scheme
Important dates:
Submission deadline: 26 June 2021.
Authors will be notified by 12 July 2021.
Camera-ready versions are due 21 July 2021.
All deadlines are 23:59 UTC-12, “Anywhere on Earth”.
The workshop will be held online on 27 August 2021.
Submission Information
Paper submissions must use the format acmart and its sub-format sigplan. They must be in PDF, printable in black and white on US Letter size. Microsoft Word and LaTeX templates for this format are available at:
http://www.sigplan.org/Resources/Author/
This format is in line with ACM conferences (such as ICFP with which we are colocated). It is recommended to use the review option when submitting a paper; this option enables line numbers for easy reference in reviews.
We want to encourage all kinds of submissions, including full papers, experience reports and lightning talks. Papers and experience reports are expected to be 10–24 pages in length using the single-column SIGPLAN acmart style. (For reference, this is about 5–12 pages of the older SIGPLAN 2-column 9pt style.) Abstracts submitted for lightning talks should be limited to 192 words. Each accepted paper and report will be presented by its authors in a 25 minute slot including Q&A. Each accepted lightning talk will be presented by its authors in a 5 minute slot, followed by 5 minutes of Q&A.
The size limits above exclude references and any optional appendices. There are no size limits on appendices, but the papers should stand without the need to read them, and reviewers are not required to read them.
Authors are encouraged to publish any code associated to their papers under an open source license, so that reviewers may try the code and verify the claims.
Proceedings will be published as a Technical Report at Northeastern University and uploaded to arXiv.org.
Publication of a paper at this workshop is not intended to replace conference or journal publication, and does not preclude re-publication of a more complete or finished version of the paper at some later conference or in a journal.
Reviewing Process
Scheme 2021 will use lightweight-double-blind reviewing. Submitted papers must omit author names and institutions and reference the authors’ own related work in the third person (e.g., not “we build on our previous work…” but rather “we build on the work of…”).
The purpose is to help the reviewers come to an initial judgement about the paper without bias, not to make it impossible for them to discover the authors if they were to try. Nothing should be done in the name of anonymity that weakens the submission or makes the job of reviewing the paper more difficult (e.g., important background references should not be omitted or anonymized).
Formatting Information
Full papers and experience reports should use the sigplan option to acmart. Lightning talks can be submitted as either a text file or a PDF file. It is recommended to use the anonymous and review options to acmart when submitting a paper; these options hide the author names and enable line numbers for easy reference in review.
Submission Link
We will post the submission link closer to the deadline.
I write a free-form diary in a copybook. This document describes a simple notation intended to make it a little more structured. It involves certain equipment, which is not that hard to find, though. You need:
This sketch is not (yet) implemented in software, it is just an imaginary construct that may be useful for reference.
@startuml
skinparam componentStyle uml2
header some header
footer some footer
title A plan for data flow in 2021
caption Unfinished. Version <2021-02-01 Mon 15:49>
legend
Unfinished TODO:
1. Style services
2. Style crons
3. Style setups
4. Style TODO/DONE
5. Style friendly/adversarial services
6. Mark points of human intervention (e.g. captcha)
7. Make components consistent
end legend
package "Foreign Services" {
  component "Facebook" as cFb
  component "Twitter" as cTw
  component "Instagram" as cIn
  component "Telegram" as cTg
  component "Vkontakte" as cVk
  component "WeChat" as cWc
  component "Discord" as cDi
  component "Jabber" as cJa
  component "TikTok" as cTt
  component "DouYin" as cDy
  component "WeiBo" as cWb
  component "LinkedIn" as cLi
  component "Skype" as cSk
  component "LiveJournal" as cLj
  component "WordPress" as cWp
  component "Blogger" as cBg
  component "WhatsApp" as cWa
  component "AcademiaEdu" as cAe
  component "ResearchGate" as cRg
}
cloud "Assorted Providers <>" as cAP
package DigitalOcean {
  database "Dovecot (qmail?)" as dbDovecot
  node "RSS-Bridge" as cRB
  node "TT-RSS" as cTTRSS
  database "PostgreSQL" as dbPG
  node "Panopticon (TODO)\nContacts manager" as cPan
}
cFb -> cRB : News
cTw -> cRB : News
cIn -> cRB : News
cTg -> cRB : News
cVk -> cRB : News
cWc -> cRB : News
cAP --> dbDovecot : Via for-site-email@domain.name
dbDovecot --> cRB
cRB --> cTTRSS
cloud "RSS" as cRSS
cRSS --> cTTRSS
cTTRSS --> dbPG
cFb -> cPan : Contacts
cTw -> cPan : Contacts
cIn -> cPan : Contacts
cTg -> cPan : Contacts
cVk -> cPan : Contacts
cWc -> cPan : Contacts
package Google {
  database "Google Contacts" as dbGcontacts
  database "Gmail" as dbGmail
}
cPan -> dbGcontacts
dbGcontacts -> cRB : Feed information
dbGcontacts --> dbGmail : Context provider
dbGmail --> dbGcontacts : Other contacts miner
package Android {
  database "Android\nAddress\nBook" as dbAndroidContacts
  database "Android\nGmail" as dbAndroidGmail
}
dbGcontacts <--> dbAndroidContacts
dbGmail <--> dbAndroidGmail
package Laptop {
  database "vdir" as dbVdir
  node "khard" as cKhard
  node "mu" as cMu
  database "maildir" as dbMaildir
  component "Thunderbird" as cTb
  cTb <--> dbGcontacts
  cTb <--> dbGmail
  package Emacs {
    node "ebdb" as cEbdb
    node "mu4e" as cMu4e
  }
}
dbGcontacts <--> dbVdir : vdirsyncer
cKhard <--> dbVdir
cEbdb <--> dbVdir
dbGmail <--> dbMaildir : mbsync
dbMaildir <--> cMu
cMu <--> cMu4e
@enduml
At some point in my life I decided that I need to know programming well in order to navigate the modern world.
This page describes the project that I undertook in 2019-2020, with the goal of modernising computer science education. Its purpose is to collect in one place links to all artefacts that were produced during the project execution, with commentary.
The object of study was one of the most famous programming problem textbooks, the "Structure and Interpretation of Computer Programs", Second Edition, by Abelson, Sussman and Sussman. At the moment of writing of this document, 24 years have passed since the release of SICP, and many things may change in computing over such a time frame. Hence, apart from looking at the book itself, it was interesting to see how much additional effort would be required to replicate the experience of the era when the book was still fresh.
This document tells the story of this endeavour. Initially, there was no plan to make any kind of comprehensive post-mortem of the project. However, the project turned out so long, so labour-intensive, and, eventually, so fruitful, that a summary became necessary, even if only for myself and my readers, and only to list the artefacts.
In short, I have solved all of the SICP's problem set in a consistent and reproducible fashion, using modern software, filling the gaps in supporting software and libraries, and documenting as much of my work as possible, in particular how much time every problem required, and how much external help was requested.
I tried going through the book honestly, without cutting any corners whatsoever. This would give me a chance to learn the subject as it is presented, along with all the technologies that the book may not touch directly but which would prove to be necessary.
As a result, I produced a far greater number of artefacts than I had anticipated.
Some philosophers would argue that people in general live by producing stories about themselves, and telling those stories to other people. This is my final "story" of this project, containing links to all the sub-stories of the project.
Interested readers are invited to read the Experience Report, as published in the Proceedings of the International Conference on Functional Programming 2020, archived as the University of Michigan Technical Report CSE-TR-001-21.
In short, it took me 729 hours to go through SICP.
In order to find time, as a working person, I needed a more systematic approach to time management. Since I had already chosen org-mode as the solution medium, using org's time-management facilities was an obvious choice.
Following the general principle that learning something is easiest when you are teaching it, I organised a seminar/lecture. The lecture was well received by the audience, because it was given during the coronavirus pandemic of 2020, when quite a lot of people found themselves locked in their homes and in need of structuring their time more efficiently.
This approach is not unlike the one outlined by John Lees-Miller (https://github.com/jdleesmiller) in his article "How a Technical Co-founder Spends his Time".
There is no presentation file, because it was a hands-on tutorial given right into the time management software.
In short, I have solved all of the problem set, measuring how much time every problem required. Furthermore, I had to write and publish several libraries in order for the solution to be runnable on modern Schemes, and as portable between them as possible.
Eventually, there were two artefacts produced from the solution of SICP:
The org-version is strongly preferred to the pdf version, because the pdf version requires certain uncanny tricks to get built, has no added value and is 5000 pages long.
To use the org-mode one, you need chibi-scheme of a sufficiently recent version, ImageMagick, as well as GNU Fortran for the last two exercises. Emacs is also strongly recommended.
During the solution process, one of the exercises requires writing a Scheme interpreter in "a low level language of your choice". In my case this choice happened to be Fortran, partly due to the relatively greater popularity of Fortran in 1996, partly due to its relatively straightforward memory management. It is a toy implementation, not recommended for any serious applications. It likely leaks memory and compares symbols in \(O(n)\).
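To illustrate what comparing symbols in \(O(n)\) means, here is the same idea sketched in Scheme rather than Fortran: interning a name scans the whole table, so every lookup slows down as the table grows.

;; A toy symbol table with O(n) lookup: a plain association list.
(define table '())

(define (intern name) ; name is a string
  (let loop ((entries table))
    (cond ((null? entries) ; not found: record a fresh id
           (let ((entry (cons name (length table))))
             (set! table (cons entry table))
             (cdr entry)))
          ((string=? (car (car entries)) name)
           (cdr (car entries))) ; found, after a linear scan
          (else (loop (cdr entries))))))

(intern "lambda") ; => 0
(intern "define") ; => 1
(intern "lambda") ; => 0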
Scheme Requests for Implementation (SRFIs) are Scheme's equivalent of XEPs, PEPs, or JCPs. In order to make working with graphics in Scheme possible, I had to implement several interfaces assumed to be "given" in SICP. The graphics sub-library found its place as SRFI-203:
I also had to write a support library that packages the functions that are already available on modern schemes. (Unlike SRFI-203, which implemented an unportable subset.)
The review process started on 2020-11-15. There is also a Lay Summary in English, and in Russian.
Since this took so long and made me think too much, I decided to analyse the solution process and to document it for future reference. The result of this analysis ended up being substantive enough for a whole "scientific" paper, and was later presented at ICFP 2020, Scheme Track.
The papers were "published online", which means that ACM is going to maintain the website with a lay summary for a while, and hope that the papers will be mirrored by the major publishers… I guess.
In any case, below you can find the:
Every "scientific" paper nowadays needs a supplementary paper to explain what it is actually about. This report is not an exception.
There are things that divide your life into "before" and "after".
I had originally planned to write a review on SICP within the loosely defined series of book reviews that had started with two books on writing Scientific Software, one by Rouson and Xu, and the other one by Oliveira and Stewart.
I was naive. SICP proved to be a significantly more influential material… I had started writing a review several times, but failed at each of those, and eventually ended up just creating this file, where I just listed the artefacts created in process.
I guess some things are just too big to have a review written about them. Maybe it is a peculiar particular case of the uncertainty principle. You can write a review as long as the length of the process you are writing about is short compared to the length of your life. Then you can truly "see the object through the lens of your life". However, if the process takes so much time and effort that it stops being small compared to your life, writing a review becomes similar to solving a coupled system of equations. The "review" would be an image of the object seen through itself.
So at the end of the day I just decided to attach the two previous review attempts to this document, and let them be. After all, perhaps every nice project must leave something undone… just as a hook in the memory.
This story started a long time ago, in 2013, when I had just entered a post-graduate programme at a world-renowned university. I was writing a study programme for myself, aimed at filling the gaps that my previous education had left, as well as outlining potential future research and engineering directions.
Functional programming had been on my list of interesting things for a long time, and I even had a book in mind, one highly regarded in a computer enthusiasts' community that I had been a part of. The book was called "Practical Common Lisp", and it got my attention when a few of the frequent visitors of an online community I used to skim through at that time were making fun of the idea that anything Lisp-related might be to any extent practical.
There was, however, an additional level of impracticality. (Evil people rumour that there are many more, but I refuse to acknowledge their existence.) The hint was given in the name of the book discussed above. If that Lisp was "common", there must as well be an "uncommon" one, right?
Indeed, such an uncommon Lisp exists. (In fact, there are several.) The uncommon Lisp is called "Scheme", and it has kind of an unhappy reputation. Practical programmers often say that it is too academic, whereas academics make a wry face and say that it is a great language, and that they would have written a lot of code in it, had they not been so busy doing Science.
Science… indeed, science was the thing that brought me to Scheme. Apart from the feeling that kept chasing me, "what is it that they are actually talking about?", I had been pointed towards Scheme by marketing.
It may seem strange, but the background is the following: I used to write a sizeable amount of neural-network and other statistics-related code in Python, TensorFlow and Theano. Most of it was just exercises, but they gave me the pythonic experience that I had needed for a long time but had not had an opportunity to get.
And the most evident thing when using TensorFlow was that it in no sense actually fits into the model of being used through Python. The computational graphs and the delayed evaluation were almost begging to be an organic feature of a language, instead of an artificially plugged-in entity.
And then I remembered a thing from ages ago. I remembered a children's book (who can guess the name?) which gave an entertaining introduction to computing; the book classified programming languages by their attribution to different tasks, and Lisp was ennobled with the "for Artificial Intelligence" title. Hmm, I thought. Why don't I find out what those people actually meant by Artificial Intelligence back then, in the Golden Era? Additionally, we had a course on functional programming at the university, and we were supposed to do something like embedding Lisp into a Categorical Abstract Machine as one of the course assignments. Why don't I remember a thing from that course, I wonder?
So Scheme came into play with several kinds of impetus: the Artificial Intelligence of the day, childhood memories, friends' recommendations, university memories.
How did I choose the book? To my greatest shame, I don't remember. I remember someone saying that people who had studied through Structure and Interpretation of Computer Programs were the best programmers he (or she?) had ever met. Who was this friend? My memory fails me. A man or a woman? Online or offline? It's a shame to lose memory at such a young age.
I started reading the book several years later, in 2016.
In fact, I have read four chapters out of five, ignoring the last chapter, dealing with what I saw as assembly language tricks that I supposedly had already learnt back when it was my first university year.
Frankly speaking, that reading was perhaps worth doing once because it taught me that by only reading a book the chances of learning anything are close to zero.
I remember spending a couple of weeks staring at my phone screen (I did read the book from a phone), and I remembered a few function names, such as display
, but almost nothing else.
Reading SICP was one of those things that kind of separate your life into before and after. It is strangely hard to say why. I mean, I could iterate over a number of reasons. Because it shows you almost the whole multi-layer structure of the computer world, from the silicon to the command line? (And even a little bit about graphics.) Because it makes you approach every piece of software with a grain of salt? That is, it makes you, on the one hand, see how crappy software is even when the developers have spent a lot of effort on making it look like it isn't, and, on the other hand, makes you feel the itch that tells you "I would have fixed this bug faster than they would even understand what I am talking about"? Maybe because, by hiding under the carpet all the gory details that are unnecessary for the narrative, it hangs this whole giant bag of heuristics right onto your neck and makes you responsible for it?
Or maybe because it makes you feel how little you actually know, and that you will possibly never be able to know much more?
All of the above is true, but there seems to be still something that is hard to enunciate.
Anyway, for myself it was very important because it was one of the few things that I did almost without any imitation.
Imitation plays such a large role in the modern life. It's strange to write such a platitudinous phrase in a personal blog describing a personal experience, especially the one talking this much about seeking the truth.
Computers make it astonishingly easy to tell lies. They also make the impossible possible, and let you see unobvious truth where it is otherwise hard to see; however, this requires effort, whereas lying happens almost naturally in the computer world.
The biggest lie of computing is, perhaps, that computing is easy. It is not. No matter what Larry Wall or Guido van Rossum tell you.
SICP makes you feel this difficulty to a full extent. SICP is by no means the only book on programming in Scheme or programming artificial intelligence. The other ones are also not really easy, but they are not deliberately hard.
Is it a general rule that good textbooks are always deliberately hard? Landau-Lifshits comes to mind. I always hated it, because it just readily ignores a lot of under-the-carpet problems with the story it tells.
But maybe there is actually something to it? Textbooks almost grow in difficulty from "hard because badly written" to "hard because huge" to "easy because well written", and finally to "hard because the authors deliberately nudge you into thinking about something"?
When I was younger, I believed in "learning by attending classes", then in "learning by listening", then in "learning by reading…", I even used to be at "learning by doing".
Now I am at the stage of "learning by writing".
I guess the next stage is "learning by teaching", and it all ends with "you will still be a fool on your deathbed".
It is relatively easy to make a programming language that is only capable of running on a single machine, or only on machines of a single product line. In fact, making machine languages is a routine exercise in universities offering majors in computing. Even I did not evade this exercise, even though I was not a low-level programming major.
This, however, although pleasurable to machine engineers, limits the practicality of the language itself, as we usually want it to be useful on as many machines as possible.
So far, only the C language has really managed to maintain a noticeable connection to the machine hardware, while still remaining a popular option among programmers. Most of the other languages embrace portability.
In fact, they embrace it so much, that the actual notion of machine parts remains nothing more than a nuisance for ordinary programmers, so huge is the abstraction gap. These programmers reason with everyday things, such as texts, pictures, thoughts.
Let me say that, in my view, SICP is a book that tries to build a bridge between "reasoning about everyday things" and "reasoning about electric signals". And exactly because the gap is so wide, the authors were faced with a difficult choice.
This report is written as a post-mortem of a project that has, perhaps, been the author’s most extensive personal project: creating a complete and comprehensive solution to one of the most famous programming problem sets in the modern computer science curriculum, “Structure and Interpretation of Computer Programs”, by Abelson, Sussman, and Sussman ([2]).
It measures exactly:
It suggests:
The solution is published online (the source code and pdf file):
This report (and the data in the appendix) can be applied immediately as:
Additionally, a time-tracking data analysis can be reproduced interactively in the org-mode version of this report. (See: Appendix: Emacs Lisp code for data analysis)
Programming language textbooks are not a frequent object of study, as they are expected to convey existing knowledge. However, teaching practitioners, when they face the task of designing a computer science curriculum for their teaching institution, have to base their decisions on something. An “ad-hoc” teaching method, primarily based on studying some particular programming language fashionable at the time of selection, is still a popular choice.
There have been attempts to approach course design with more rigour. The “Structure and Interpretation of Computer Programs” was created as a result of such an attempt. SICP was revolutionary for its time, and perhaps can still be considered revolutionary nowadays. Twenty years later, this endeavour was analysed by Felleisen in a paper “Structure and Interpretation of Computer Science Curriculum” ([14]). He then reflected upon the benefits and drawbacks of the deliberately designed syllabus from a pedagogical standpoint. He proposed what he believes to be a pedagogically superior successor to the first generation of deliberate curriculum. (See: “How to Design Programs” (HTDP) [15])
Leaving aside the pedagogical quality of the textbook (as the author is not a practising teacher), this report touches a different (and seldom considered!) aspect of a computer science (and, in general, any other subject’s) curriculum. That is, precisely, how much work is required to pass a particular course.
This endeavour was spurred by the author’s previous experience of learning about partial differential equations through a traditional paper-and-pen approach, only mildly augmented with time-tracking software. But even such a tiny augmentation exposed an astonishing disparity between the declared laboriousness of a task and the empirically measured time required to complete it.
The author, therefore, decided to build upon that experience and to try to design as smooth, manageable, and measurable an approach to performing university coursework as possible. A computer science subject was an obvious choice.
The solution was planned, broken down into parts, harnessed with a software support system, and executed in a timely and measured manner by the author, thus proving that the chosen goal is doable. The complete measured data are provided. Teaching professionals may benefit from it when planning coursework specialised to their requirements.
More generally, the author wants to propose a comprehensive reassessment of university teaching in general, based on empirical approaches (understanding precisely how, when, and what each party involved in the teaching process does), in order to select the most efficient (potentially even using an optimisation algorithm) strategy when selecting a learning approach for every particular student.
The author wanted to provide a solution that would satisfy the following principles:
These principles need an explanation.
The author considers completeness to be an essential property of every execution of a teaching syllabus.
In simple words, what does it mean “to pass a course” or “to learn a subject” at all? How exactly can one formalise the statement “I know calculus”? Even simpler, what allows a student to say “I have learnt everything that was expected in a university course on calculus”?
It would be a good idea to survey teachers, students, employers, politicians and random members of the community to establish what it means for them that a person “knows a subject”.
Following are some potential answers to these questions:
Any combination of these can also be chosen to signify the “mastering” of a subject, but the course designer is then met with a typical goal-attainment, multi-objective optimisation problem ([18]); such problems are still usually solved by reducing the multiple goals to a single, engineered goal.
Looking at the list above from a “Martian point of view” ([5]), we will see that all the goals listed above are reducible to a single “completing coursework” goal. “Completing coursework” is not reducible to any of those specific sub-goals in general, so the “engineered goal” may take the shape of a tree-structured problem set (task/subtask). “Engineered” tasks may include attending tutorials, watching videos and writing feedback.
Moreover, thinking realistically, doing coursework often is the only way that a working professional can study without altogether abandoning her job.
Therefore, choosing a computer science textbook that is known primarily for the problem set that comes with it, even more than for the actual text of the material, was a natural choice.
However, that is not enough, because even though “just solving all of the exercises” may be the most measurable and the most necessary learning outcome, is it sufficient?
As the author intended to “grasp the skill” rather than just “pass the exercises”, he initially considered inventing additional exercises to cover parts of the course material not covered by the original problem set.
For practical reasons (in order for the measured data to reflect the original book’s exercises), in the “reference solution” referred to in this report’s bibliography, the reader will not find exercises that are not a part of the original problem set.
The author, however, re-drew several figures from the book, representing those types of figures that are not required to be drawn by any of the exercises.
This was done in order to “be able to reproduce the material contained in the book from scratch at a later date”. This was done only for the cases for which the author considered the already available exercises insufficient. The additional figures did not demand a large enough amount of working time to change the total difficulty estimate noticeably.
One common objection to the undertaken endeavour may be the following. In most universities (if not all), it is not necessary to solve all exercises in order to complete a course. This is often true, and especially true for mathematics-related courses (whose problem books usually contain several times more exercises than reasonably cover the course content). The author, however, considers SICP exercises not to be an example of such a problem set. The exercises cover the course material with minimal overlap, and the author even considered adding several more for the material that the exercises did not fully cover.
Another objection would be that a self-study experience cannot faithfully imitate a university experience at all because a university course contains tutorials and demonstrations as crucial elements. Problem-solving methods are “cooked” by teaching assistants and delivered to the students in a personalised manner in those tutorials.
This is indeed a valid argument. However, teaching assistants may not necessarily come from a relevant background; they are often recruited from an available pool and not explicitly trained. For such cases, the present report may serve as a crude estimate of the time needed for the teaching assistants to prepare for the tutorials.
Furthermore, many students choose not to attend classes at all either because they are over-confident, or due to high workload. For these groups, this report may serve similarly as a crude estimate.
Moreover, prior research suggests that the effect of class attendance on the learning outcomes of the top quartile (by grade) of students is low ([9] and [21]).
For the student groups that benefit most from tutorials, this report (if given as a recommended reading for the first lesson) may serve as additional evidence in favour of attendance.
Additionally, nothing seems to preclude recording videos of tutorials and providing them as a supplementary material at the subsequent deliveries of the course. The lack of interactivity may be compensated for by a large amount of the material (such as the video recordings of questions and answers) accumulated through many years and a well-functioning query system.
It is often underestimated how much imbalance there is between a teacher and a pupil. The teacher not only knows the subject of study better (which is expected), but also decides how and when a student is going to study. This is often overlooked by practitioners, who consider themselves simply as sources of knowledge or, even worse, only as examiners. However, it is worth considering the whole effect that a teacher has on the student’s life. In particular, a student has no choice other than to trust the teacher on the choice of exercises. A student will likely mimic the teacher’s choice of tools used for the execution of a solution.
The main point of the previous paragraph is that teaching is not only the process of data transmission. It is also the process of metadata transmission, the development of meta-cognitive skills (see [22]). Therefore, meta-cognitive challenges, although they may very well be valuable contributions to the student’s “thinking abilities”, deserve their own share of consideration when preparing a course.
Examples of meta-cognitive challenges include:
An additional challenge to the learning process is the lack of peer support. There have been attempts by learning institutions to encourage peer support among students, but the success of those attempts is unclear. Do students really help each other in those artificially created support groups? Inevitably, communication in those groups will not be limited only to the subject of study. To what extent does this side-communication affect the learners?
A support medium is even more critical for adult self-learners, who do not get even those artificial support groups created by the school functionaries and do not get access to teaching assistance.
It should be noted that the support medium (a group chat platform, or a mailing list) choice, no matter how irrelevant to the subject itself it may be, is a significant social factor.
This is not to say that a teacher should create a support group in whatever particular social medium that happens to be fashionable at the start of the course.
This is only to say that deliberate effort should be spent on finding the best support configuration.
In the author’s personal experience:
It should be noted that out of those communities, only the Open Data Science community, and a small Haskell community reside in “fashionable” communication systems.
The summary of the community interaction is under the “meta-cognitive” exercises section because the skill of finding people who can help you with your problems is one of the most useful soft skills and one of the hardest to teach. Moreover, the very people who can and may answer questions are, in most situations, not at all obliged to do so, so soliciting an answer from non-deliberately-cooperating people is another cognitive exercise that is worth covering explicitly in a lecture.
Repeating the main point of the previous paragraph in other words: human communities consist of rude people. Naturally, no-one can force anyone to bear rudeness, but no-one can force anyone to be polite, either. The meta-cognitive skill of extracting valuable knowledge from willing but rude people is critical but seldom taught.
The author considers it vital to convey to students, as well as to teachers, the following idea: it is not the fashion, population, easy availability, promotion, and social acceptability of the support media that matters. Unfortunately, it is not even the technological sophistication, technological modernity or convenience; it is the availability of information and the availability of people who can help.
Support communication was measured by the following:
The author did not collect measures of other communication means.
Several figures from SICP were re-drawn using a textual representation. The choice of figures was driven by the idea that someone who successfully completed the book should also be able to re-create the book material and therefore should know how to draw similar diagrams. Therefore, those were chosen to be representative of the kinds of figures not required to be drawn by any exercise.
The list of re-drawn figures:
(cons 1 2)
The final choice of tools turned out to be the following:
Chibi-Scheme was virtually the only Scheme system claiming to fully support the latest Scheme standard, R7RS-large (Red Edition), so there was no other choice.
This is especially true when imagining a student unwilling to go deeper into the particular curiosities of the various schools of thought responsible for creating the many partly-compliant Scheme systems.
Several libraries (three of which were standardised, and three of which were not) were used to ensure the completeness of the solution.
Effectively, it is not possible to solve all the exercises using only the standardised part of the Scheme language.
Even Scheme combined with standardised extensions is not enough.
However, only one non-standard library was strictly required: (chibi process), which served as a bridge between Scheme and the graphics toolkit.
Git is not often taught in schools. The reasons may include teachers' unwillingness to busy themselves with something deemed either trivial or impossible to get by without, or their being overloaded with work. However, practice demonstrates that students still too often graduate without any concept of file version control, which significantly hinders work efficiency. Git was chosen because it is, arguably, the most widely used version-control system.
ImageMagick turned out to be the easiest way to draw images consisting of simple straight lines.
There is still no standard way to connect Scheme applications to applications written in other languages.
Therefore, by the principle of minimal extension, ImageMagick was chosen, as it required just a single non-standard Scheme procedure.
Moreover, this procedure (a simple synchronous application call) is likely to be the most standard interoperability primitive invented.
Almost all operating systems support applications executing other applications.
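As an illustration of this primitive, here is a minimal sketch (not the author's actual wrapper; the file name and the drawing command are merely plausible), assuming (chibi process) provides the variadic synchronous system call described above:

#+begin_src scheme
;; Render a single straight line into a PNG by synchronously invoking
;; ImageMagick's `convert': fork, exec, and wait for completion.
(import (scheme base) (chibi process))

(define (draw-line! file x1 y1 x2 y2)
  (system "convert" "-size" "100x100" "xc:white"
          "-draw"
          (string-append "line "
                         (number->string x1) "," (number->string y1) " "
                         (number->string x2) "," (number->string y2))
          file))

(draw-line! "segment.png" 10 10 90 90)
#+end_src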
PlantUML is a code-driven implementation of the international standard of software visualisation diagrams. The syntax is straightforward and well documented. The PlantUML-Emacs interface exists and is relatively reliable. The textual representation conveys the hacker spirit and supports easy version control. UML almost totally dominates the software visualisation market, and almost every university programming degree includes it to some extent. It seemed, therefore, very natural (where the problem permitted) to solve the "diagramming" problems of SICP with industry-standard-compliant diagrams.
Graphviz was used in an attempt to employ another industry standard for solving the diagramming problems not supported by UML.
The dot package benefits from being fully machine-parsable and context-independent even more than UML. However, it turned out to be not as convenient as expected.
TikZ is practically the only general-purpose, code-driven drawing package. So, when neither UML nor Graphviz managed to capture the complexity of the models being diagrammed, TikZ ended up being the only choice. Just as natural an approach would have been to draw everything using a graphical tool, such as Inkscape or Adobe Illustrator. The first problem with the images generated by such tools, though, is that they are hard to manage under version control. The second problem is that it was desirable to keep all the products of the course in one digital artefact (i.e., one file). Single-file packaging reduces confusion caused by different versions of the same code, makes searching more straightforward, and simplifies the presentation to a potential examiner.
gfortran, or GNU Fortran, was the low-level language of choice for the last two problems in the problem set. The reasons for choosing this not very popular language were:
GNU Unix Utilities: the author did not originally intend to use these, but diff turned out to be extremely effective for illustrating the differences between generated code pieces in Chapter 5. Additionally, in some cases, they were used as a universal glue between different programs.
GNU Emacs is, de facto, the most popular IDE among Scheme users: the IDE used by the Free Software Foundation founders, likely the editor used when writing SICP, and also the editor an aspiring freshman is likely to pick as the most "hacker-like". It is, perhaps, the most controversial choice, as the IDE most likely to be used by freshman university students in general would be Microsoft Visual Studio. Another popular option would be DrRacket, which packages a component dedicated to supporting the solving of SICP problems. However, Emacs turned out to have the best support for "generic Lisp" development, even though its support for Scheme is not as good as might be desired. The decisive factor ended up being org-mode (discussed later). Informally speaking, buying entirely into the Emacs platform ended up being a substantial mind-expanding experience. The learning curve is steep, however.
As mentioned above, the main point of this report is to supply the problem execution measures for public use. Later sections will elaborate on how data about exercise completion were collected using org-mode's time-tracking facility. The time-tracking data in section 8 do not include learning Emacs or org-mode. However, some data about these activities were collected nevertheless:
Reading the Emacs Lisp manual required 10 study sessions with a total length of 32 hours 40 minutes. Additional learning of Emacs without reading the manual required 59 hours 14 minutes.
Org-mode helps to resolve dependencies between exercises. SICP provides an additional challenge (a meta-cognitive exercise) in that its problems are highly dependent on one another. As an example, problems from Chapter 5 require solutions to the successfully solved problems of Chapter 1. A standard practice in modern schools is to copy the code (or other form of solution) and paste it into the solution of a dependent exercise. However, in the later parts of SICP, the solutions end up requiring tens of pieces of code written in the preceding chapters. Sheer copying would not only blow up the solution files immensely and make searching painful; it would also make it extremely hard to propagate fixes for bugs discovered by later usage back into the earlier solutions.
People familiar with the work of Donald Knuth will recognise the similarity of org-mode with his WEB system and its web2c implementation. Another commonly used WEB-like system is Jupyter Notebook (See [ 29]).
Org-mode helps package a complete student's work into a single file. Imagine a case in which a student needs to send his work to the teacher for examination. Every additional file that a student sends along with the code is a source of potential confusion. Even proper file naming, though it increases readability, requires significant concentration to enforce and demands that the teacher dig into peculiarities that become irrelevant the very moment after he signs the work off. Things get worse when the teacher has not only to examine the student's work, but also to test it (which is a typical situation with computer science exercises).
Org-mode can be exported into a format convenient for later revisits. Another reason to carefully consider the solution format is the students’ future employability. This problem is not unfamiliar to the Arts majors, who have been collecting and arranging “portfolios” of their work for a long time. However, STEM students generally do not understand the importance of a portfolio. A prominent discussion topic in job interviews is, “What have you already done?”. Having a portfolio, in a form easily presentable during an interview, may be immensely helpful to the interviewee.
A potential employer is almost guaranteed not to have any software or equipment to run the former student’s code. Even the student himself would probably lack a carefully prepared working setup at the interview. Therefore, the graduation work should be “stored”, or “canned” in a format as portable and time-resistant as possible.
Unsurprisingly, the most portable and time-resistant format for practical use is plain white paper. Ideally, the solution (after being examined by a teacher) should be printable as a report. Additionally, the comparatively (in relation to the full size of SICP) small amount of work required to turn a solution that is “just enough to pass” into a readable report would be an emotional incentive for the students to carefully post-process their work. Naturally, “plain paper” is not a very manageable medium nowadays. The closest manageable approximation is PDF. So, the actual “source code” of a solution should be logically and consistently exportable into a PDF file. Org-mode can serve this purpose through the PDF export backend.
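As a minimal sketch of this workflow (the title and author are hypothetical; the keywords are standard org-mode), a handful of export settings at the top of the org file suffice, after which C-c C-e l p (org-latex-export-to-pdf) produces the PDF:

#+TITLE: SICP: A Full Solution
#+AUTHOR: A. Student
#+OPTIONS: toc:t num:t
#+LATEX_HEADER: \usepackage{tikz}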
Org-mode has an almost unimaginable number of use cases. (For example, this report has been written in org-mode.) While the main benefits of using org-mode for formatting the coursework were the interactivity of code execution and the possibility of export, another benefit that appeared almost for free was minimal-overhead time-tracking (human performance profiling). Although this initially appeared to be a by-product of choosing a specific tool, the measures collected with the aid of org-mode are the main contribution of this report.
The way org-mode particulars were used is described in the next section, along with the statistical summary.
SICP’s problems can be roughly classified into the following classes:
Wonderfully absent are problems of the data-analysis kind.
This section will explain how these classes of problem can be solved in a “single document mode”.
Essays are the most straightforward case. The student can simply write the answer to the question below the heading corresponding to the problem. Org-mode provides several minimal formatting capabilities that are enough to cover all the required use cases.
Mathematical problems require that a \TeX{} system be present on the student's machine, and employ org-mode's ability to embed \TeX{} mathematics, along with previews, directly into the text. The author ended up conducting almost zero pen-and-paper calculations while doing SICP's mathematical exercises.
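For instance, the fixed-point equation behind Exercise 1.35 can be typed straight into the org buffer as an inline fragment, \( \varphi = 1 + 1/\varphi \;\Rightarrow\; \varphi = \frac{1+\sqrt{5}}{2} \), and previewed in place with C-c C-x C-l (org-latex-preview).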
Programming exercises in Scheme are mostly easily formatted as org-mode "babel" blocks, with the output pasted directly into the document body and updated as needed.
Programming exercises in Scheme with input require a little effort to make them work correctly. It is sometimes not entirely obvious whether the input should be interpreted as verbatim text or as executable code. Ultimately, it turned out to be possible to format all the input data as either "example" or "code" blocks, feed them into the recipient blocks via an ":stdin" block directive, and present all the test cases (different inputs) and test results (corresponding outputs) in the same document.
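A minimal sketch of the pattern (the block name and program file are hypothetical; the shell block relies on org-babel's :stdin support): a named example block holds one test case, and a shell block pipes it into the Scheme program as standard input:

#+name: test-input-1
#+begin_example
(1 2 3 4 5)
#+end_example

#+begin_src sh :stdin test-input-1 :results output
chibi-scheme solution.scm
#+end_src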
Programming exercises in a low-level language required wrapping the low-level-language code into "babel" blocks, and combining the results in a "shell" block. This introduces an operating system dependency. However, the GNU Unix Utilities are widespread enough for this not to be considered a limitation.
Programming exercises with graphical output turned out to be the trickiest part from the software suite perspective. Eventually, a Scheme-system-dependent (chibi) wrapper around the ImageMagick graphics manipulation tool was written. Org-mode has a special syntax for the inclusion of graphics files, so the exercise solutions generated the image files and pasted the image-inclusion code into the org buffer.
Standard drawing exercises illustrate a problem that is extremely widespread, but seldom well understood, perhaps because the people aiming to solve it usually do not come from the programming community. Indeed, there are several standard visual conventions for industrial illustrations and diagramming, including UML, ArchiMate, SDL, and various others. Wherever a SICP figure admitted a standard-based representation, the author tried to use that standard to express the answer to the problem. The PlantUML code-driven diagramming tool was used most often, as its support for UML proved to be superior to the alternatives. The org-plantuml bridge made it possible to solve these problems in a manner similar to the coding problems, as "org-babel" blocks.
Non-standard drawing exercises, the most prominent being those requiring environment diagrams (debugging interfaces), were significantly more challenging. When a prepared mental model (i.e. an established diagramming standard) was absent, the diagram had to be implemented from scratch in an improvised way. The TikZ language proved to have enough features to cover the requirements of the book where PlantUML was not enough, though it required much reading of the manual and an appropriate level of familiarity with \TeX.
This section explains exactly how the working process was organised, and then shows some aggregated work measures that were collected.
The execution was performed in the following way:
At the start of the work, the outline-tree corresponding to the book subsection tree was created. Most leaves are two-state TODO-headings. (Some outline leaves correspond to sections without problems, and thus are not TODO-styled.)
A TODO-heading is a special type of org-mode heading that exports its state (TODO/DONE) to a simple database, which allows monitoring of the overall TODO/DONE ratio of the document.
Intermediate levels are not TODO-headings, but they contain the field representing the total ratio of DONE problems in a subtree.
The top-level ratio is the total number of finished problems divided by the total number of problems.
An example of the outline looks as follows:
* SICP [385/404]
** Chapter 1: Building abstractions ... [57/61]
*** DONE Exercise 1.1 Interpreter result
    CLOSED: [2019-08-20 Tue 14:23]
...
*** DONE Exercise 1.2 Prefix form
    CLOSED: [2019-08-20 Tue 14:25]
#+begin_src scheme :exports both :results value
(/ (+ 5 4 (- 2 (- 3 (+ 6 (/ 4 5)))))
   (* 3 (- 6 2) (- 2 7)))
#+end_src

#+RESULTS:
: -37/150
...
When work is clearly divided into parts and, for each unit, its completion status is self-evident, the visibility of completeness creates a sense of control in the student. The “degree of completeness of the whole project”, available at any moment, provides an important emotional experience of “getting close to the result with each completed exercise”.
Additional research is needed on how persistent this emotion is in students and how much it depends on the uneven distribution of difficulty or the total time consumption. There is, however, empirical evidence that even very imprecise, self-measured KPIs do positively affect the chance of reaching the goal. (See: [ 42])
From the author’s personal experience, uneven distribution of difficulties at the leaf-level tasks is a major demotivating factor. However, the real problems we find in daily life are not of consistent difficulty, and therefore managing an uneven distribution of difficulty is a critical meta-cognitive skill. Partitioning a large task into smaller ones (not necessarily in the way suggested by the book) may be a way to tackle this problem. Traces of this approach are visible through the “reference” solution PDF.
The problems were executed almost sequentially. Work on the subsequent problem was started immediately after the previous problem had been finished.
Out of more than 350 exercises, only 13 were executed out of order (See section 3.2). Sequentiality of problems is essential for proper time accounting because the total time attributed to a problem is the sum of durations of all study sessions between the end of the problem considered and the end of the previous problem. It is not strictly required for the problem sequence to be identical to the sequence proposed by the book, but it is important that, if a problem is postponed, the study sessions corresponding to the initial attempt to solve this problem be somehow removed from the session log dataset.
In this report, study sessions corresponding to the initial attempts at solving out-of-order problems were simply ignored. This did not affect the overall duration measures much, because those sessions were usually short.
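The attribution rule above can be stated compactly. The following is a minimal sketch (written in Scheme for uniformity with the rest of the solution; the actual analysis code is the Emacs Lisp in the appendix), assuming each session is a (start . end) pair of timestamps in minutes:

#+begin_src scheme
;; Sum the parts of the study sessions that fall between the closure
;; of the previous problem and the closure of the current one.
(define (minutes-attributed sessions prev-close this-close)
  (let loop ((ss sessions) (total 0))
    (if (null? ss)
        total
        (let* ((s     (car ss))
               (start (max (car s) prev-close))
               (end   (min (cdr s) this-close)))
          (loop (cdr ss) (+ total (max 0 (- end start))))))))
#+end_src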
Sequentiality is one of the weakest points of this report. It is generally hard to find motivation to work through a problem set sequentially. SICP does enforce sequentiality for a large share of problems by making the later problems depend on solutions of the previous ones, but this “dependence coverage” is not complete.
As the most straightforward workaround, the author may once again suggest dropping the initial attempts of solving the out-of-order problems from the data set entirely. This should be relatively easy to do because the student (arguably) is likely to decide whether to continue solving the problem or to postpone it within one study session. This study session may then be appropriately trimmed.
The author read the whole book before starting the project. The time to read the prose could also have been included in the project's total time consumption, but the author decided against it. In fact, when approached from the viewpoint of completing the exercises, the material in the book turned out to have little in common with the impression created by merely reading the text.
A deliberate effort was spent on avoiding closing a problem at the same time as closing the study session. The reason for this is to exploit the well-known effects that interrupted tasks have on memory and resumption (see [3]).
The data come in two closely related datasets.
Dataset 1: Exercise completion time was recorded using a standard org-mode closure time tracking mechanism. (See Appendix: Full data on the exercise completion times.) For every exercise, completion time was recorded as an org-mode time-stamp, with minute-scale precision.
Dataset 2: Study sessions were recorded in a separate org-mode file in the standard org-mode time-interval format (two time-stamps):
"BEGIN_TIME -- END_TIME".
(See Appendix: Full data on the study sessions.)
During each study session, the author tried to concentrate as much as possible, and to do only the activities related to the problem set. These are not limited to just writing the code and tuning the software setup. They include the whole “package” of activities leading to the declaration of the problem solved. These include, but are not limited to, reading or watching additional material, asking questions, fixing bugs in related software, and similar activities.
Several software problems were discovered in the process of making this solution. These problems were reported to the software authors. Several of those problems were fixed after a short time, thus allowing the author to continue with the solution. For a few of the problems, workarounds were found. None of the problems prevented full completion of the problem set.
The author found it very helpful to have a simple dependency resolution tool at his disposal. As has been mentioned above, SICP's problems make heavy use of one another. It was therefore critical to find a way to re-use code within a single org-mode document. Indeed, org's WEB-like capabilities ("noweb" links) proved to be sufficient. Noweb links are a method for the verbatim inclusion of one code block into other code blocks. In particular, Exercise 5.48 required the inclusion of 58 other code blocks into the final solution block. Pure copying would not suffice because SICP exercises often involve the evaluation of code written in previous exercises by the code written in the current exercise; therefore, later exercises are likely to expose errors in the earlier exercises' solutions.
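A minimal sketch of the mechanism (block names are hypothetical): a named block from an earlier exercise is spliced verbatim into a later one through a noweb reference:

#+name: ex-1-3-helpers
#+begin_src scheme
(define (square x) (* x x))
#+end_src

#+begin_src scheme :noweb yes :results value
<<ex-1-3-helpers>>
(define (sum-of-squares a b)
  (+ (square a) (square b)))
(sum-of-squares 3 4)
#+end_src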
The following table presents some of the aggregated measurements from solving the problem set.
Thirteen problems were solved out-of-order. This means that those problems may have been the trickiest (although not necessarily the hardest.)
Exercise | Days Spent | Spans Sessions | Minutes Spent |
---|---|---|---|
Exercise 2.46 make-vect | 2.578 | 5 | 535 |
Exercise 4.78 Non-deterministic queries | 0.867 | 6 | 602 |
Exercise 3.28 Primitive or-gate | 1.316 | 2 | 783 |
Exercise 4.79 Prolog environments | 4.285 | 5 | 940 |
Exercise 3.9 Environment structures | 21.030 | 10 | 1100 |
Exercise 4.77 Lazy queries | 4.129 | 9 | 1214 |
Exercise 4.5 cond with arrow | 12.765 | 7 | 1252 |
Exercise 5.52 Making a compiler for Scheme | 22.975 | 13 | 2359 |
Exercise 2.92 Add, mul for different variables | 4.556 | 11 | 2404 |
Exercise 5.51 EC-evaluator in low-level language | 28.962 | 33 | 5684 |
It is hardly unexpected that writing a Scheme interpreter in a low-level language (Exercise 5.51) turned out to be the most time-consuming problem of the whole problem set. After all, it required learning an entirely new language from scratch. In the author's case, the low-level language happened to be Fortran 2018. Learning Fortran up to the required level is relatively straightforward, albeit time-consuming.
Exercise 5.52, a compiler for Scheme, implicitly required that the previous exercise be solved already, as the runtime support code is shared between these two problems. All of the compiled EC-evaluator turned out to be just a single (very long) Fortran function.
Exercise 2.92 proves that it is possible to create significantly difficult exercises even without introducing the concept of mutation into the curriculum. This problem bears the comment from the SICP authors, "This is not easy!". Indeed, the final solution contained more than eight hundred lines of code, involved designing an expression normalisation algorithm from scratch, and required twenty-five unit tests to ensure consistency. It is simply a huge task.
Exercise 4.5 is probably one of those exercises that would benefit most from a Teaching Assistant's help. In fact, the exercise itself is not that hard. The considerable workload comes from the fact that, in order to test that the solution is correct, a fully working interpreter is required. Therefore, this exercise in fact includes reading the whole of Chapter 4 and assembling the interpreter. Furthermore, the solution involves a lot of list manipulation, which is inherently error-prone when using only the functions already provided by SICP.
Exercise 4.77 required heavy modification of the codebase that had already been accumulated. It is likely the most architecture-intensive exercise of the book, apart from the exercise requiring a full rewrite of the backtracking engine of Prolog in a non-deterministic evaluator (Exercise 4.78). The code is very hard to implement incrementally, and the system is hardly testable until the last bit is finished. Furthermore, this exercise required modifying the lowest-level data structures of the problem domain and adjusting all the higher-level functions accordingly.
Exercise 4.79 is, in fact, an open-ended problem. The author considers it done, but the task is formulated so vaguely that it opens up an almost infinite range of possible solutions. This problem can hence consume any amount of time.
Exercise 3.9 required implementing a library for drawing environment diagrams. This may seem a trivial demand, as environment diagramming is an expected element of a decent debugger. However, the Scheme standard does not include many debugging capabilities. Debugging facilities differ among Scheme implementations, and even those are usually not visual enough to generate the images required by the book. There exists an EnvDraw library (and its relatives), but the author failed to embed any of them into easily publishable Scheme code. It turned out to be more straightforward to implement the diagrams as TikZ pictures in embedded \LaTeX{} blocks.
The time spent on Exercise 3.28 includes the assembly of the whole circuit-simulation code into a working system. The time required to actually solve the problem was comparatively short.
The same can be said about Exercise 2.46, which required writing a bridge between a Scheme interpreter and a drawing system. The exercise itself is relatively easy.
To sum up this section, the most laborious exercises in the book are the ones that require a student to:
In total, the ten most challenging problems account for 280 hours of work, which is more than a third of the full problem-set workload.
This graph is probably the most representative of the whole problem set. As expected, the last few problems turned out to be among the hardest. The second part of the course turned out to be more time-consuming than the first one.
The figure depicts the number of days (Y-axis) a problem (enumerated by the X-axis coordinate) was loaded in the author’s brain. In simple words, it is the number of days that the state of “trying to solve a problem number X” spanned.
This measure is less justified than the “high concentration” time presented on the figure in the previous section. However, it may nevertheless be useful for encouraging students who get demotivated when spending a long “high concentration” session on a problem with no apparent success. Naturally, most (but not all) problems are solvable within one session (one day).
The second spike in the distribution can be attributed to general tiredness from solving such a huge problem set and the need for a break. The corresponding spike on the graph of the study sessions is less prominent.
A “session” may be defined as a period of high concentration when the student is actively trying to solve a problem and get the code (or essay) written. This graph presents the number of sessions (Y-axis) spent on each problem (enumerated by the X-axis), regardless of the session length.
When a student goes on a vacation, the problem, presumably, remains loaded in the student’s brain. However, periodic “assaults” in the form of study sessions may be necessary to feed the subconscious processing with the new data.
During vacation time, there should be a spike on the "days per problem" graph, but not on the "sessions per problem" graph. Indeed, the second spike in the "days per problem" graph has a counterpart on the "sessions per problem" graph that is much shorter.
The linearly-scaled difficulty histogram depicts how many problems (Y-axis) require up to “bin size” hours for solution. Naturally, most of the exercises are solvable within one to three hours.
The logarithmically-scaled difficulty histogram depicts how many problems (Y-axis) require up to \(2^{X}\) hours for solution. It is very interesting to observe that the histogram shape resembles a uni-modal distribution. It is hard to think of a theoretical foundation on which to base assumptions about the distribution law. Prior research, however, may imply that the distribution is log-normal (see [10]).
As follows immediately from the introduction, this report is essentially a single-point estimate of the difficulty distribution of a university-level problem set.
As far as the author knows, this is the first such complete difficulty breakdown of a university-level problem set in existence.
As has been mentioned in section 3.2, the complete execution of the problem set required 729 hours. In simple words, this is a very long time. If a standard working day is assumed to be 8 hours long, the complete solution would require 91 working days, that is, around 18 five-day weeks, or more than four months of full-time work.
In the preface to the second edition, the authors claim that a redacted version of the course (e.g. dropping the logic programming part, the part dedicated to the implementation of the register-machine simulator, and most of the compiler-related sections) can be covered in one semester. This statement is in agreement with the numbers presented in this report. Nevertheless, as teachers would probably not want to assign every problem in the book, they would need to make a selection based on both the coverage of the course topics and the time required. The author hopes that this report can provide insight into the difficulty aspect.
On the other hand, the author would instead recommend opting for a two-semester course. If several of the hardest problems (i.e. problems discussed in section 3.3) are left out, the course can be fitted into two 300-hour modules. Three hundred hours per semester-long course matches the author’s experience of studying partial differential equations at the Moscow Institute of Physics and Technology.
Another important consideration is the amount of time that instructors require to verify solutions and to write feedback for the students. It is reasonable to assume that marking the solutions and writing feedback would require the same amount of time (within an order of magnitude) as the amount needed to solve the problem set, since every problem solution would have to be visited by a marker at least once. For simplicity, the author assumes that writing feedback would require 72 hours per student.
This parameter would then be multiplied by the expected number of students per group, which may vary between institutions, but can be lower-bounded by 5. Therefore the rough estimate would be \(\mbox{const} \cdot 72 \cdot 5 \approx 360\) hours, or 45 full working days (2 months). This duration is hardly practicable for a lone teacher, even if broken down over two semesters (each requiring 180 hours). On the other hand, if the primary teacher is allowed to hire additional staff for marking, the problem becomes manageable again. One of the applications of this report may be as supporting evidence for lead instructors (professors) asking their school administrations for teaching assistants.
The field of difficulty assessment of university courses (especially with computer-based tools) still offers a lot to investigate. As far as the author of this report knows, this is the first exhaustive difficulty assessment of a university course. (This is not to say that SICP has not been successfully solved in full before. Various solutions can be found on many well-known software forges.)
The first natural direction of research would then be expanding the same effort towards other problem sets and other subjects.
On the other hand, this report is just a single-point estimate, and therefore extremely biased. It would be a significant contribution if the same problem set (or indeed parts of it, or even single problems) were solved by different people following the same protocol.
The provision of the solution protocol (the software setup and the time-tracking procedure) is deemed by the author to be a contribution of this report.
Professors teaching such a course are encouraged to show this report to their students and to suggest that they execute the required problem set along the lines of the protocol given here.
Another research direction could be towards finding an optimal curriculum design beyond the areas covered by SICP. It should not be unexpected if the students decide not to advance further in the course as long as their personal difficulty assessment exceeds a certain unknown threshold. In other words, the author suspects that, at some point, the students may feel an emotion that may be expressed as, “I have been solving this for too long, and see little progress; I should stop.”
It would be interesting to measure such a threshold and to suggest curriculum-design strategies that aim to minimise course drop-out. Such strategies may include attempts at hooking into students' intrinsic motivation (proper measurements of the execution process may provide insight into where it is hidden), as well as better design of an extrinsic motivation toolset (e.g. finding better KPIs for rewards and penalties, for which proper measures should be helpful as well).
It would be interesting to observe whether the students who follow the protocol (and see their progress after each session) are more or less likely to drop the course than those who do not. This could constitute a test of intrinsic motivation in line with the self-determination theory of Deci and Ryan (see [ 32]).
Another important direction may be the development and formalisation of coursework submission formats, in order to facilitate further collection of similar data on this or other problem sets.
This section contains the author’s personal view on the problem set and the questions it raises.
The author (Vladimir Nikishkin) enjoyed doing it. On the other hand, it is hard to believe that teaching this course to first-year undergraduate students can easily be made successful. It is unlikely that a real-world student could dedicate seven hundred hours to a single subject without significant support, even if the subject were broken down into two semesters (the more so given that 25 years have passed since the second edition was released, during which time the world of programming has expanded enormously). Even if such a student were found, he would probably have other subjects in the semester, as well as the need to attend classes and demonstrations.
Admittedly, out of almost four hundred exercises, the author cannot find a single superfluous one. Moreover, the author had to add some extra activities in order to cover several topics better. Every exercise teaches some valuable concept and nudges the student into thinking more deeply.
The course could have been improved in the area of garbage collection and other memory management topics.
Indeed, the main cons-memory garbage collector is explained in sufficient detail to implement it, but several other parts of the interpreter memory model are left without explanation. Very little is said about efficiently storing numbers, strings, and other objects.
There is not very much information about a rational process of software development. While this is not fundamental knowledge, it would be helpful to undergraduates.
The last two exercises amount to one-fifth of the whole work. It was entirely unexpected to encounter a task to be completed in a language other than Scheme after most of the exercises had already been finished.
Probably the biggest drawback of the book is the absence of any conclusion. The book does point the reader's attention in various directions by means of an extensive bibliography; however, the author, as a willing student, would have liked to see a narrativised overview of the possible future directions.
If the author may, by virtue of having personally gone through this transformative experience, offer a few suggestions to university curriculum designers, they would be the following:
This is often considered a meta-cognitive exercise to be solved by the students themselves, but the author's personal experience is not reassuring in this respect. Very few students, and even professionals, use \TeX{} efficiently. It took more than 50 hours just to refresh the \TeX{} skills that the author had originally learnt when writing a thesis.
Reading some manual from the first page to the last is a very enlightening experience, and additionally useful in teaching how to assess the time needed to grasp the skill of using a piece of software. As a by-product, this experience may help the students to write better manuals for their own software.

This section attempts to provide a complete list of materials used in the process of solving the problem set. It is not to be confused with the list of materials used in the preparation of this Experience Report.
[1] Harold Abelson and Gerald J. Sussman. Structure and Interpretation of Computer Programs. MIT Press, 1st edition, 1985.
[2] Harold Abelson, Gerald J. Sussman, and Julie Sussman. Structure and Interpretation of Computer Programs. MIT Press, 2nd edition, 1996.
[3] Dan L. Adler and Jacob S. Kounin. Some factors operating at the moment of resumption of interrupted tasks. 7(2):255--267.
[4] Isaac Balbin and Koenraad Lecot. Logic Programming. Springer Netherlands, 1985.
[5] Eric Berne. What Do You Say After You Say Hello? Bantam Books, New York, 1973.
[6] John A. Campbell, editor. Implementations of Prolog. Ellis Horwood/Halsted Press/Wiley, 1984.
[7] Taylor Campbell et al. MIT/GNU Scheme, 2019.
[8] Mats Carlsson. On implementing Prolog in functional programming. 2(4):347--359, 1984.
[9] Karen L. St. Clair. A case against compulsory class attendance policies in higher education. 23(3):171--180, 1999.
[10] Edwin L. Crow and Kunio Shimizu. Lognormal Distributions: Theory and Applications. Routledge, May 2018.
[11] Carsten Dominik. The Org-Mode 7 Reference Manual: Organize Your Life with GNU Emacs. Network Theory, UK, 2010. With contributions by David O'Toole, Bastien Guerry, Philip Rooke, Dan Davison, Eric Schulte, and Thomas Dye.
[12] Carsten Dominik et al. Org-mode, 2019.
[13] John Ellson et al. Graphviz.
[14] Matthias Felleisen, Robert Bruce Findler, Matthew Flatt, and Shriram Krishnamurthi. The structure and interpretation of the computer science curriculum. 14:365--378, July 2004.
[15] Matthias Felleisen, Robert Bruce Findler, Matthew Flatt, and Shriram Krishnamurthi. How to Design Programs: An Introduction to Programming and Computing. The MIT Press, Cambridge, Massachusetts, 2018.
[16] Free Software Foundation. GNU Emacs, 2019.
[17] Free Software Foundation. GNU debugger, 2020.
[18] Floyd W. Gembicki and Yacov Y. Haimes. Approach to performance and sensitivity multiobjective optimization: The goal attainment method. 20(6):769--771, December 1975.
[19] Martin Hlosta, Drahomira Herrmannova, Lucie Vachova, Jakub Kuzilek, Zdenek Zdrahal, and Annika Wolff. Modelling student online behaviour in a virtual learning environment. 2018.
[20] Eugene Kohlbecker. eu-Prolog, reference manual and report. Technical report, University of Indiana (Bloomington), Computer Science Department, April 1984.
[21] E. W. Kooker. Changes in grade distributions associated with changes in class attendance policies. 13:56--57, 1976.
[22] Kelly Y. L. Ku and Irene T. Ho. Metacognitive strategies that enhance critical thinking. 5(3):251--267, July 2010.
[23] Douglas McGregor. Theory X and theory Y. 358:374, 1960.
[24] Michael Metcalf, John Reid, and Malcolm Cohen. Modern Fortran Explained. Oxford University Press, October 2018.
[25] Vladimir Nikishkin. A full solution to the structure and interpretation of computer programs.
[26] Thomas Pender. UML Weekend Crash Course. Hungry Minds, Indianapolis, IN, 2002.
[27] PlantUML Developers. Drawing UML with PlantUML.
[28] PlantUML Developers. PlantUML.
[29] Project Jupyter Developers. Jupyter Notebook: a server-client application that allows editing and running notebook documents via a web browser, 2019.
[30] Alexey Radul and Gerald J. Sussman. Revised report on the propagator model.
[31] Jose A. O. Ruiz et al. Geiser, 2020.
[32] Richard M. Ryan and Edward L. Deci. Self-Determination Theory: Basic Psychological Needs in Motivation, Development, and Wellness. Guilford Publications, 2017.
[33] Alex Shinn. Chibi-Scheme.
[34] Alex Shinn, John Cowan, Arthur A. Gleckler, et al., editors. Revised⁷ Report on the Algorithmic Language Scheme. 2013.
[35] Alex Shinn et al. Chibi-Scheme, 2019.
[36] Richard Stallman et al. Debugging with GDB, 2020.
[37] Richard Stallman et al. GNU Emacs Lisp Reference Manual, 2020.
[38] Richard Stallman et al. GNU Emacs Manual, 2020.
[39] Till Tantau. The TikZ and PGF Packages.
[40] Till Tantau et al. Portable graphics format.
[41] TeX User Groups. TeX Live, 2019.
[42] Jeffrey J. VanWormer, Simone A. French, Mark A. Pereira, and Ericka M. Welsh. The impact of regular self-weighing on weight management: A systematic literature review. 5(1):54, 2008.
[43] Patrick Volkerding et al. Slackware Linux, 2019.
For the code used to generate the tables in the following sections, see: Appendix: Emacs Lisp code for data analysis.
No | Exercise Name | Days Spent | Spans Sessions | Minutes Spent |
---|---|---|---|---|
1 | Exercise 1.1 Interpreter result | 1.211 | 2 | 459 |
2 | Exercise 1.2 Prefix form | 0.001 | 1 | 2 |
3 | Figure 1.1 Tree representation, showing the value of each su | 0.007 | 1 | 10 |
4 | Exercise 1.4 Compound expressions | 0.003 | 1 | 4 |
5 | Exercise 1.5 Ben’s test | 0.008 | 1 | 11 |
6 | Exercise 1.6 If is a special form | 0.969 | 2 | 118 |
7 | Exercise 1.7 Good enough? | 0.949 | 3 | 436 |
8 | Exercise 1.8 Newton’s method | 0.197 | 2 | 193 |
9 | Exercise 1.10 Ackermann’s function | 3.038 | 2 | 379 |
10 | Exercise 1.11 Recursive vs iterative | 0.037 | 1 | 54 |
11 | Exercise 1.12 Recursive Pascal’s triangle | 0.012 | 1 | 17 |
12 | Exercise 1.13 Fibonacci | 0.092 | 1 | 132 |
13 | Exercise 1.9 Iterative or recursive? | 3.722 | 2 | 65 |
14 | Exercise 1.14 count-change | 1.038 | 2 | 50 |
15 | Exercise 1.15 sine | 0.267 | 2 | 195 |
16 | Exercise 1.16 Iterative exponentiation | 0.032 | 1 | 46 |
17 | Exercise 1.17 Fast multiplication | 0.019 | 1 | 28 |
18 | Exercise 1.18 Iterative multiplication | 0.497 | 2 | 23 |
19 | Exercise 1.19 Logarithmic Fibonacci | 1.374 | 2 | 93 |
20 | Exercise 1.20 GCD applicative vs normal | 0.099 | 1 | 142 |
21 | Exercise 1.21 smallest-divisor | 0.027 | 1 | 39 |
22 | Exercise 1.22 timed-prime-test | 0.042 | 1 | 61 |
23 | Exercise 1.23 (next test-divisor) | 0.383 | 2 | 5 |
24 | Exercise 1.24 Fermat method | 0.067 | 1 | 96 |
25 | Exercise 1.25 expmod | 0.051 | 1 | 74 |
26 | Exercise 1.26 square vs mul | 0.003 | 1 | 4 |
27 | Exercise 1.27 Carmichael numbers | 0.333 | 2 | 102 |
28 | Exercise 1.28 Miller-Rabin | 0.110 | 1 | 158 |
29 | Exercise 1.29 Simpson’s integral | 0.464 | 2 | 68 |
30 | Exercise 1.30 Iterative sum | 0.030 | 2 | 10 |
31 | Exercise 1.31 Product | 0.028 | 1 | 40 |
32 | Exercise 1.32 Accumulator | 0.017 | 1 | 24 |
33 | Exercise 1.33 filtered-accumulate | 0.092 | 1 | 133 |
34 | Exercise 1.34 lambda | 0.006 | 1 | 8 |
35 | Exercise 1.35 fixed-point | 0.265 | 2 | 87 |
36 | Exercise 1.36 fixed-point-with-dampening | 0.035 | 1 | 50 |
37 | Exercise 1.37 cont-frac | 0.569 | 2 | 348 |
38 | Exercise 1.38 euler constant | 0.000 | 1 | 0 |
39 | Exercise 1.39 tan-cf | 0.025 | 1 | 36 |
40 | Exercise 1.40 newtons-method | 0.205 | 2 | 6 |
41 | Exercise 1.41 double-double | 0.010 | 1 | 15 |
42 | Exercise 1.42 compose | 0.004 | 1 | 6 |
43 | Exercise 1.43 repeated | 0.019 | 1 | 27 |
44 | Exercise 1.44 smoothing | 0.099 | 2 | 142 |
45 | Exercise 1.45 nth-root | 0.056 | 1 | 80 |
46 | Exercise 1.46 iterative-improve | 0.033 | 1 | 48 |
47 | Exercise 2.1 make-rat | 1.608 | 2 | 109 |
48 | Exercise 2.2 make-segment | 0.024 | 1 | 34 |
49 | Exercise 2.3 make-rectangle | 2.183 | 2 | 174 |
50 | Exercise 2.4 cons-lambda | 0.007 | 1 | 10 |
51 | Exercise 2.5 cons-pow | 0.041 | 1 | 59 |
52 | Exercise 2.6 Church Numerals | 0.024 | 1 | 34 |
53 | Exercise 2.7 make-interval | 0.019 | 1 | 28 |
54 | Exercise 2.8 sub-interval | 0.124 | 1 | 58 |
55 | Exercise 2.9 interval-width | 0.006 | 1 | 8 |
56 | Exercise 2.10 div-interval-better | 0.010 | 1 | 15 |
57 | Exercise 2.11 mul-interval-nine-cases | 0.052 | 1 | 75 |
58 | Exercise 2.12 make-center-percent | 0.393 | 2 | 43 |
59 | Exercise 2.13 formula for tolerance | 0.003 | 1 | 5 |
60 | Exercise 2.14 parallel-resistors | 0.047 | 1 | 68 |
61 | Exercise 2.15 better-intervals | 0.007 | 1 | 10 |
62 | Exercise 2.16 interval-arithmetic | 0.002 | 1 | 3 |
63 | Exercise 2.17 last-pair | 0.966 | 2 | 89 |
64 | Exercise 2.18 reverse | 0.006 | 1 | 9 |
65 | Exercise 2.19 coin-values | 0.021 | 1 | 30 |
66 | Exercise 2.20 dotted-tail notation | 0.311 | 2 | 156 |
67 | Exercise 2.21 map-square-list | 0.013 | 1 | 19 |
68 | Exercise 2.22 wrong list order | 0.007 | 1 | 10 |
69 | Exercise 2.23 for-each | 0.006 | 1 | 9 |
70 | Exercise 2.24 list-plot-result | 0.111 | 2 | 75 |
71 | Exercise 2.25 caddr | 0.037 | 1 | 54 |
72 | Exercise 2.26 append cons list | 0.011 | 1 | 16 |
73 | Exercise 2.27 deep-reverse | 0.433 | 2 | 40 |
74 | Exercise 2.28 fringe | 0.026 | 1 | 37 |
75 | Exercise 2.29 mobile | 0.058 | 1 | 83 |
76 | Exercise 2.30 square-tree | 0.100 | 2 | 122 |
77 | Exercise 2.31 tree-map square tree | 0.019 | 1 | 27 |
78 | Exercise 2.32 subsets | 0.010 | 1 | 15 |
79 | Exercise 2.33 map-append-length | 0.375 | 2 | 96 |
80 | Exercise 2.34 horners-rule | 0.006 | 1 | 8 |
81 | Exercise 2.35 count-leaves-accumulate | 0.011 | 1 | 16 |
82 | Exercise 2.36 accumulate-n | 0.006 | 1 | 9 |
83 | Exercise 2.37 matrix-*-vector | 0.017 | 1 | 24 |
84 | Exercise 2.38 fold-left | 0.372 | 2 | 65 |
85 | Exercise 2.39 reverse fold-right fold-left | 0.005 | 1 | 7 |
86 | Exercise 2.40 unique-pairs | 0.029 | 1 | 42 |
87 | Exercise 2.41 triple-sum | 2.195 | 2 | 57 |
88 | Figure 2.8 A solution to the eight-queens puzzle. | 0.001 | 1 | 2 |
89 | Exercise 2.42 k-queens | 3.299 | 2 | 122 |
90 | Exercise 2.43 slow k-queens | 0.019 | 1 | 28 |
91 | Exercise 2.46 make-vect | 2.578 | 5 | 535 |
92 | Exercise 2.47 make-frame | 0.083 | 1 | 10 |
93 | Exercise 2.48 make-segment | 0.054 | 1 | 78 |
94 | Exercise 2.49 segments->painter applications | 0.294 | 2 | 139 |
95 | Exercise 2.50 flip-horiz and rotate270 and rotate180 | 0.019 | 1 | 27 |
96 | Exercise 2.51 below | 1.801 | 4 | 524 |
97 | Exercise 2.44 up-split | 1.169 | 2 | 89 |
98 | Exercise 2.45 split | 0.113 | 2 | 23 |
99 | Exercise 2.52 modify square-limit | 0.450 | 2 | 58 |
100 | Exercise 2.53 quote introduction | 0.008 | 1 | 11 |
101 | Exercise 2.54 equal? implementation | 0.050 | 1 | 72 |
102 | Exercise 2.55 quote quote | 0.000 | 1 | 0 |
103 | Exercise 2.56 differentiation-exponentiation | 0.393 | 2 | 65 |
104 | Exercise 2.57 differentiate-three-sum | 0.560 | 3 | 147 |
105 | Exercise 2.58 infix-notation | 0.112 | 1 | 161 |
106 | Exercise 2.59 union-set | 0.277 | 2 | 6 |
107 | Exercise 2.60 duplicate-set | 0.012 | 1 | 17 |
108 | Exercise 2.62 ordered-union-set (ordered list) | 0.973 | 2 | 14 |
109 | Exercise 2.61 sets as ordered lists | 0.004 | 1 | 6 |
110 | Exercise 2.63 tree->list (binary search tree) | 0.078 | 1 | 113 |
111 | Exercise 2.64 balanced-tree | 2.740 | 3 | 106 |
112 | Exercise 2.65 tree-union-set | 9.785 | 2 | 47 |
113 | Exercise 2.66 tree-lookup | 0.035 | 1 | 50 |
114 | Exercise 2.67 Huffman decode a simple message | 0.303 | 3 | 108 |
115 | Exercise 2.68 Huffman encode a simple message | 0.023 | 1 | 33 |
116 | Exercise 2.69 Generate Huffman tree | 0.608 | 2 | 160 |
117 | Exercise 2.70 Generate a tree and encode a song | 0.072 | 2 | 57 |
118 | Exercise 2.71 Huffman tree for frequencies 5 and 10 | 0.258 | 2 | 202 |
119 | Exercise 2.72 Huffman order of growth | 0.050 | 2 | 26 |
120 | Exercise 2.73 data-driven-deriv | 0.605 | 2 | 189 |
121 | Exercise 2.74 Insatiable Enterprises | 0.410 | 4 | 171 |
122 | Exercise 2.75 make-from-mag-ang message passing | 0.019 | 1 | 28 |
123 | Exercise 2.76 types or functions? | 0.003 | 1 | 5 |
124 | Exercise 2.77 generic-algebra-magnitude | 0.772 | 3 | 190 |
125 | Exercise 2.78 Ordinary numbers for Scheme | 0.212 | 2 | 67 |
126 | Exercise 2.79 generic-equality | 1.786 | 2 | 28 |
127 | Exercise 2.80 Generic arithmetic zero? | 0.056 | 1 | 80 |
128 | Exercise 2.81 coercion to-itself | 0.749 | 3 | 330 |
129 | Exercise 2.82 three-argument-coercion | 0.433 | 2 | 230 |
130 | Exercise 2.83 Numeric Tower and (raise) | 0.717 | 3 | 116 |
131 | Exercise 2.84 Using raise (raise-type) in apply-generic | 0.865 | 2 | 135 |
132 | Exercise 2.85 Dropping a type | 3.089 | 5 | 507 |
133 | Exercise 2.86 Compound complex numbers | 0.274 | 2 | 108 |
134 | Exercise 2.87 Generalized zero? | 0.919 | 4 | 389 |
135 | Exercise 2.88 Subtraction of polynomials | 0.646 | 3 | 50 |
136 | Exercise 2.89 Dense term-lists | 0.083 | 1 | 120 |
137 | Exercise 2.90 Implementing dense polynomials as a separate p | 0.400 | 2 | 148 |
138 | Exercise 2.91 Division of polynomials | 0.111 | 2 | 103 |
139 | Exercise 2.92 Ordering of variables so that addition and mul | 4.556 | 11 | 964 |
140 | Exercise 2.93 Rational polynomials | 0.378 | 3 | 198 |
141 | Exercise 2.94 Greatest-common-divisor for polynomials | 0.091 | 1 | 131 |
142 | Exercise 2.95 Illustrate the non-integer problem | 0.450 | 2 | 149 |
143 | Exercise 2.96 Integerizing factor | 0.325 | 2 | 275 |
144 | Exercise 2.97 Reduction of polynomials | 0.201 | 1 | 140 |
145 | Exercise 3.1 accumulators | 0.425 | 2 | 53 |
146 | Exercise 3.2 make-monitored | 0.027 | 1 | 39 |
147 | Exercise 3.3 password protection | 0.010 | 1 | 14 |
148 | Exercise 3.4 call-the-cops | 0.010 | 1 | 15 |
149 | Exercise 3.5 Monte-Carlo | 0.528 | 2 | 98 |
150 | Exercise 3.6 reset a prng | 0.479 | 2 | 68 |
151 | Exercise 3.7 Joint accounts | 0.059 | 1 | 85 |
152 | Exercise 3.8 Right-to-left vs Left-to-right | 0.026 | 1 | 38 |
153 | Exercise 3.9 Environment structures | 21.030 | 10 | 1100 |
154 | Exercise 3.10 Using let to create state variables | 4.933 | 2 | 138 |
155 | Exercise 3.11 Internal definitions | 0.994 | 2 | 219 |
156 | Exercise 3.12 Drawing append! | 2.966 | 3 | 347 |
157 | Exercise 3.13 make-cycle | 0.010 | 1 | 14 |
158 | Exercise 3.14 mystery | 0.385 | 2 | 77 |
159 | Exercise 3.15 set-to-wow! | 1.942 | 3 | 117 |
160 | Exercise 3.16 count-pairs | 0.171 | 1 | 118 |
161 | Exercise 3.17 Real count-pairs | 0.029 | 1 | 42 |
162 | Exercise 3.18 Finding cycles | 0.012 | 1 | 17 |
163 | Exercise 3.19 Efficient finding cycles | 0.934 | 2 | 205 |
164 | Exercise 3.20 Procedural set-car! | 0.633 | 2 | 121 |
165 | Exercise 3.21 queues | 0.021 | 1 | 30 |
166 | Exercise 3.22 procedural queue | 0.294 | 2 | 67 |
167 | Exercise 3.23 dequeue | 0.049 | 2 | 71 |
168 | Exercise 3.24 tolerant tables | 0.780 | 3 | 33 |
169 | Exercise 3.25 multilevel tables | 2.103 | 2 | 486 |
170 | Exercise 3.26 binary tree table | 0.013 | 1 | 18 |
171 | Exercise 3.27 memoization | 0.802 | 2 | 2 |
172 | Exercise 3.28 primitive or-gate | 1.316 | 2 | 783 |
173 | Exercise 3.29 Compound or-gate | 0.001 | 1 | 2 |
174 | Exercise 3.30 ripple-carry adder | 0.009 | 1 | 13 |
175 | Exercise 3.31 Initial propagation | 0.013 | 1 | 18 |
176 | Exercise 3.32 Order matters | 0.007 | 1 | 10 |
177 | Exercise 3.33 averager constraint | 9.460 | 3 | 198 |
178 | Exercise 3.34 Wrong squarer | 0.042 | 1 | 61 |
179 | Exercise 3.35 Correct squarer | 0.012 | 1 | 17 |
180 | Exercise 3.36 Connector environment diagram | 3.319 | 3 | 263 |
181 | Exercise 3.37 Expression-based constraints | 0.037 | 1 | 53 |
182 | Exercise 3.38 Timing | 0.061 | 1 | 88 |
183 | Exercise 3.39 Serializer | 1.266 | 4 | 269 |
184 | Exercise 3.40 Three parallel multiplications | 5.973 | 3 | 332 |
185 | Exercise 3.41 Better protected account | 4.229 | 2 | 97 |
186 | Exercise 3.42 Saving on serializers | 0.023 | 1 | 33 |
187 | Exercise 3.43 Multiple serializations | 0.040 | 1 | 58 |
188 | Exercise 3.44 Transfer money | 0.005 | 1 | 7 |
189 | Exercise 3.45 new plus old serializers | 0.004 | 1 | 6 |
190 | Exercise 3.46 broken test-and-set! | 0.007 | 1 | 10 |
191 | Exercise 3.47 semaphores | 1.044 | 2 | 53 |
192 | Exercise 3.48 serialized-exchange deadlock | 0.022 | 1 | 31 |
193 | Exercise 3.49 When numbering accounts doesn’t work | 0.008 | 1 | 11 |
194 | Exercise 3.50 stream-map multiple arguments | 0.317 | 3 | 96 |
195 | Exercise 3.51 stream-show | 0.007 | 1 | 10 |
196 | Exercise 3.52 streams with mind-boggling | 0.034 | 1 | 49 |
197 | Exercise 3.53 stream power of two | 0.016 | 1 | 23 |
198 | Exercise 3.54 mul-streams | 0.005 | 1 | 7 |
199 | Exercise 3.55 streams partial-sums | 0.013 | 1 | 18 |
200 | Exercise 3.56 Hamming’s streams-merge | 0.015 | 1 | 21 |
201 | Exercise 3.57 exponential additions fibs | 0.007 | 1 | 10 |
202 | Exercise 3.58 Cryptic stream | 0.010 | 1 | 14 |
203 | Exercise 3.59 power series | 0.422 | 2 | 30 |
204 | Exercise 3.60 mul-series | 0.048 | 1 | 69 |
205 | Exercise 3.61 power-series-inversion | 0.087 | 1 | 126 |
206 | Exercise 3.62 div-series | 0.006 | 1 | 8 |
207 | Exercise 3.63 sqrt-stream | 0.299 | 2 | 8 |
208 | Exercise 3.64 stream-limit | 1.546 | 2 | 55 |
209 | Exercise 3.65 approximating logarithm | 0.039 | 1 | 56 |
210 | Exercise 3.66 lazy pairs | 0.515 | 2 | 107 |
211 | Exercise 3.67 all possible pairs | 0.010 | 1 | 14 |
212 | Exercise 3.68 pairs-louis | 0.012 | 1 | 17 |
213 | Exercise 3.70 merge-weighted | 0.522 | 2 | 188 |
214 | Exercise 3.71 Ramanujan numbers | 0.035 | 1 | 51 |
215 | Exercise 3.72 Ramanujan 3-numbers | 0.901 | 2 | 187 |
216 | Figure 3.32 | 0.022 | 1 | 32 |
217 | Exercise 3.73 RC-circuit | 0.090 | 1 | 130 |
218 | Exercise 3.74 zero-crossings | 0.153 | 1 | 221 |
219 | Exercise 3.75 filtering signals | 0.056 | 1 | 81 |
220 | Exercise 3.76 stream-smooth | 0.073 | 2 | 36 |
221 | Exercise 3.77 | 0.038 | 1 | 55 |
222 | Exercise 3.78 second order differential equation | 0.039 | 1 | 56 |
223 | Exercise 3.79 general second-order ode | 0.007 | 1 | 10 |
224 | Figure 3.36 | 0.058 | 1 | 84 |
225 | Exercise 3.80 RLC circuit | 0.013 | 1 | 19 |
226 | Exercise 3.81 renerator-in-streams | 0.040 | 1 | 57 |
227 | Exercise 3.82 streams Monte-Carlo | 0.378 | 2 | 57 |
228 | Exercise 4.1 list-of-values ordered | 0.437 | 2 | 14 |
229 | Exercise 4.2 application before assignments | 0.021 | 1 | 30 |
230 | Exercise 4.3 data-directed eval | 0.030 | 1 | 43 |
231 | Exercise 4.4 eval-and and eval-or | 0.035 | 1 | 50 |
232 | Exercise 4.5 cond with arrow | 12.765 | 7 | 1252 |
233 | Exercise 4.6 Implementing let | 0.019 | 1 | 27 |
234 | Exercise 4.7 Implementing let* | 0.046 | 1 | 66 |
235 | Exercise 4.8 Implementing named let | 0.070 | 1 | 101 |
236 | Exercise 4.9 Implementing until | 0.928 | 3 | 102 |
237 | Exercise 4.10 Modifying syntax | 14.168 | 3 | 462 |
238 | Exercise 4.11 Environment as a list of bindings | 4.368 | 2 | 194 |
239 | Exercise 4.12 Better abstractions for setting a value | 0.529 | 2 | 120 |
240 | Exercise 4.13 Implementing make-unbound! | 0.550 | 2 | 149 |
241 | Exercise 4.14 meta map versus built-in map | 0.004 | 1 | 6 |
242 | Exercise 4.15 The halts? predicate | 0.018 | 1 | 26 |
243 | Exercise 4.16 Simultaneous internal definitions | 0.162 | 2 | 177 |
244 | Exercise 4.17 Environment with simultaneous definitions | 0.036 | 1 | 52 |
245 | Exercise 4.18 Alternative scanning | 0.018 | 1 | 26 |
246 | Exercise 4.19 Mutual simultaneous definitions | 0.220 | 2 | 96 |
247 | Exercise 4.20 letrec | 0.206 | 2 | 195 |
248 | Exercise 4.21 Y-combinator | 0.013 | 1 | 18 |
249 | Exercise 4.22 Extending evaluator to support let | 1.768 | 3 | 144 |
250 | Exercise 4.23 Analysing sequences | 0.005 | 1 | 7 |
251 | Exercise 4.24 Analysis time test | 0.022 | 1 | 32 |
252 | Exercise 4.25 lazy factorial | 0.034 | 1 | 49 |
253 | Exercise 4.26 unless as a special form | 0.313 | 1 | 451 |
254 | Exercise 4.27 Working with mutation in lazy interpreters | 0.515 | 2 | 112 |
255 | Exercise 4.28 Eval before applying | 0.005 | 1 | 7 |
256 | Exercise 4.29 Lazy evaluation is slow without memoization | 0.035 | 1 | 50 |
257 | Exercise 4.30 Lazy sequences | 0.153 | 2 | 74 |
258 | Exercise 4.31 Lazy arguments with syntax extension | 0.092 | 2 | 112 |
259 | Exercise 4.32 streams versus lazy lists | 0.503 | 2 | 87 |
260 | Exercise 4.33 quoted lazy lists | 0.097 | 2 | 103 |
261 | Exercise 4.34 printing lazy lists | 0.219 | 3 | 205 |
262 | Exercise 4.50 The ramb operator | 0.813 | 4 | 266 |
263 | Exercise 4.35 an-integer-between and Pythagorean triples | 0.103 | 2 | 138 |
264 | Exercise 3.69 triples | 0.115 | 2 | 85 |
265 | Exercise 4.36 infinite search for Pythagorean triples | 0.011 | 1 | 16 |
266 | Exercise 4.37 another method for triples | 0.035 | 1 | 51 |
267 | Exercise 4.38 Logical puzzle - Not same floor | 0.027 | 1 | 39 |
268 | Exercise 4.39 Order of restrictions | 0.003 | 1 | 5 |
269 | Exercise 4.40 People to floor assignment | 0.019 | 1 | 28 |
270 | Exercise 4.41 Ordinary Scheme to solve the problem | 0.072 | 1 | 103 |
271 | Exercise 4.42 The liars puzzle | 0.503 | 1 | 81 |
272 | Exercise 4.43 Problematical Recreations | 0.052 | 1 | 75 |
273 | Exercise 4.44 Nondeterministic eight queens | 0.074 | 1 | 106 |
274 | Exercise 4.45 Five parses | 0.186 | 3 | 145 |
275 | Exercise 4.46 Order of parsing | 0.007 | 1 | 10 |
276 | Exercise 4.47 Parse verb phrase by Louis | 0.013 | 1 | 18 |
277 | Exercise 4.48 Extending the grammar | 0.037 | 1 | 1 |
278 | Exercise 4.49 Alyssa’s generator | 0.031 | 1 | 45 |
279 | Exercise 4.51 Implementing permanent-set! | 0.030 | 1 | 43 |
280 | Exercise 4.52 if-fail | 0.063 | 1 | 91 |
281 | Exercise 4.53 test evaluation | 0.005 | 1 | 7 |
282 | Exercise 4.54 analyze-require | 0.468 | 2 | 31 |
283 | Exercise 4.55 Simple queries | 0.258 | 2 | 372 |
284 | Exercise 4.56 Compound queries | 0.018 | 1 | 26 |
285 | Exercise 4.57 custom rules | 0.147 | 3 | 112 |
286 | Exercise 4.58 big shot | 0.025 | 1 | 36 |
287 | Exercise 4.59 meetings | 0.031 | 1 | 45 |
288 | Exercise 4.60 pairs live near | 0.016 | 1 | 23 |
289 | Exercise 4.61 next-to relation | 0.008 | 1 | 11 |
290 | Exercise 4.62 last-pair | 0.033 | 1 | 48 |
291 | Exercise 4.63 Genesis | 0.423 | 2 | 40 |
292 | Figure 4.6 How the system works | 0.022 | 1 | 31 |
293 | Exercise 4.64 broken outranked-by | 0.065 | 1 | 94 |
294 | Exercise 4.65 second-degree subordinates | 0.012 | 1 | 17 |
295 | Exercise 4.66 Ben’s accumulation | 0.013 | 1 | 18 |
296 | Exercise 4.70 Cons-stream delays its second argument | 0.167 | 3 | 79 |
297 | Exercise 4.72 interleave-stream | 0.002 | 1 | 3 |
298 | Exercise 4.73 flatten-stream delays | 0.006 | 1 | 8 |
299 | Exercise 4.67 loop detector | 0.251 | 1 | 361 |
300 | Exercise 4.68 reverse rule | 0.686 | 2 | 321 |
301 | Exercise 4.69 great grandchildren | 0.080 | 2 | 65 |
302 | Exercise 4.71 Louis’ simple queries | 0.134 | 2 | 69 |
303 | Exercise 4.74 Alyssa’s streams | 0.044 | 1 | 64 |
304 | Exercise 4.75 unique special form | 0.055 | 1 | 79 |
305 | Exercise 4.76 improving and | 0.797 | 2 | 438 |
306 | Figure 5.2 Controller for a GCD Machine | 0.167 | 3 | 124 |
307 | Exercise 5.1 Register machine plot | 0.020 | 1 | 29 |
308 | Figure 5.1 Data paths for a Register Machine | 0.599 | 2 | 115 |
309 | Exercise 5.2 Register machine language description of Exerci | 0.006 | 1 | 8 |
310 | Exercise 5.3 Machine for sqrt using Newton Method | 0.306 | 2 | 286 |
311 | Exercise 5.4 Recursive register machines | 1.001 | 4 | 274 |
312 | Exercise 5.5 Hand simulation for factorial and Fibonacci | 0.110 | 1 | 158 |
313 | Exercise 5.6 Fibonacci machine extra instructions | 0.011 | 1 | 16 |
314 | Exercise 5.7 Test the 5.4 machine on a simulator | 0.458 | 2 | 133 |
315 | Exercise 5.8 Ambiguous labels | 0.469 | 1 | 160 |
316 | Exercise 5.9 Prohibit (op)s on labels | 0.017 | 1 | 25 |
317 | Exercise 5.10 Changing syntax | 0.011 | 1 | 16 |
318 | Exercise 5.11 Save and restore | 0.619 | 3 | 186 |
319 | Exercise 5.12 Data paths from controller | 0.424 | 2 | 183 |
320 | Exercise 5.13 Registers from controller | 0.470 | 2 | 101 |
321 | Exercise 1.3 Sum of squares | 1.044 | 1 | 6 |
322 | Exercise 5.14 Profiling | 0.347 | 2 | 57 |
323 | Exercise 5.15 Instruction counting | 0.052 | 1 | 75 |
324 | Exercise 5.16 Tracing execution | 0.058 | 1 | 83 |
325 | Exercise 5.18 Register tracing | 0.631 | 2 | 90 |
326 | Exercise 5.19 Breakpoints | 0.149 | 1 | 215 |
327 | Exercise 5.17 Printing labels | 0.001 | 1 | 1 |
328 | Exercise 5.20 Drawing a list “(#1=(1 . 2) #1)” | 0.189 | 2 | 139 |
329 | Exercise 5.21 Register machines for list operations | 0.617 | 2 | 115 |
330 | Exercise 5.22 append and append! as register machines | 0.047 | 1 | 68 |
331 | Exercise 5.23 Extending EC-evaluator with let and cond | 0.862 | 4 | 363 |
332 | Exercise 5.24 Making cond a primitive | 0.160 | 2 | 199 |
333 | Exercise 5.25 Normal-order (lazy) evaluation | 1.010 | 4 | 342 |
334 | Exercise 5.26 Explore tail recursion with factorial | 0.195 | 2 | 26 |
335 | Exercise 5.27 Stack depth for a recursive factorial | 0.008 | 1 | 11 |
336 | Exercise 5.28 Interpreters without tail recursion | 0.028 | 1 | 40 |
337 | Exercise 5.29 Stack in tree-recursive Fibonacci | 0.015 | 1 | 21 |
338 | Exercise 5.30 Errors | 0.615 | 3 | 147 |
339 | Exercise 5.31 a preserving mechanism | 0.417 | 2 | 161 |
340 | Exercise 5.32 symbol-lookup optimization | 0.052 | 1 | 75 |
341 | Exercise 5.33 compiling factorial-alt | 0.753 | 2 | 267 |
342 | Exercise 5.34 compiling iterative factorial | 0.169 | 1 | 243 |
343 | Exercise 5.35 Decompilation | 0.022 | 1 | 32 |
344 | Exercise 5.36 Order of evaluation | 0.845 | 4 | 256 |
345 | Exercise 5.37 preserving | 0.135 | 1 | 194 |
346 | Exercise 5.38 open code primitives | 0.914 | 3 | 378 |
347 | Exercise 5.41 find-variable | 0.028 | 1 | 40 |
348 | Exercise 5.39 lexical-address-lookup | 0.044 | 1 | 64 |
349 | Exercise 5.42 Rewrite compile-variable and ~compile-assign | 0.679 | 2 | 118 |
350 | Exercise 5.40 maintaining a compile-time environment | 0.085 | 2 | 101 |
351 | Exercise 5.43 Scanning out defines | 0.249 | 3 | 261 |
352 | Exercise 5.44 open code with compile-time environment | 0.020 | 1 | 29 |
353 | Exercise 5.45 stack usage analysis for a factorial | 0.528 | 1 | 61 |
354 | Exercise 5.46 stack usage analysis for fibonacci | 0.017 | 1 | 25 |
355 | Exercise 5.47 calling interpreted procedures | 0.049 | 1 | 71 |
356 | Exercise 5.48 compile-and-run | 1.020 | 3 | 264 |
357 | Exercise 5.49 read-compile-execute-print loop | 0.015 | 1 | 22 |
358 | Exercise 4.77 lazy queries | 4.129 | 9 | 1214 |
359 | Exercise 5.50 Compiling the metacircular evaluator | 0.007 | 1 | 10 |
360 | Exercise 4.78 non-deterministic queries | 0.867 | 6 | 602 |
361 | Exercise 5.51 Translating the EC-evaluator into a low-level | 28.962 | 33 | 5684 |
362 | Exercise 5.52 Making a compiler for Scheme | 22.975 | 13 | 2359 |
363 | Exercise 4.79 prolog environments | 4.285 | 5 | 940 |
Linear histogram of net minutes per task (32 bins of width 5684/32 = 177.625 minutes):
Bin Lower Bound (Minutes) | N. tasks |
---|---|
0. | 301 |
177.625 | 38 |
355.25 | 14 |
532.875 | 2 |
710.5 | 1 |
888.125 | 2 |
1065.75 | 2 |
1243.375 | 1 |
1421. | 0 |
1598.625 | 0 |
1776.25 | 0 |
1953.875 | 0 |
2131.5 | 0 |
2309.125 | 1 |
2486.75 | 0 |
2664.375 | 0 |
2842. | 0 |
3019.625 | 0 |
3197.25 | 0 |
3374.875 | 0 |
3552.5 | 0 |
3730.125 | 0 |
3907.75 | 0 |
4085.375 | 0 |
4263. | 0 |
4440.625 | 0 |
4618.25 | 0 |
4795.875 | 0 |
4973.5 | 0 |
5151.125 | 1 |
Logarithmic histogram of net minutes per task (bin boundaries at powers of two):
Bin Lower Bound (Minutes) | N. tasks |
---|---|
1 | 2 |
2 | 6 |
4 | 15 |
8 | 41 |
16 | 55 |
32 | 67 |
64 | 85 |
128 | 52 |
256 | 29 |
512 | 6 |
1024 | 3 |
2048 | 1 |
4096 | 1 |
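For reference, here is a minimal sketch of how one task's net minutes map onto the two histograms above. It mirrors the make-linear-histogram and make-logarithmic-histogram functions in the Emacs Lisp analysis code at the end of this appendix; the 177.625-minute bin width is the observed maximum (5684 net minutes, Exercise 5.51) divided by 32 bins, and Exercise 4.5 (1252 net minutes) serves as the example.
;; A minimal sketch of the binning above; the figures are taken from the
;; tables in this report, the binning rules from the analysis code below.
(let ((minutes 1252.0))                         ; Exercise 4.5, net minutes
  (list
   ;; linear histogram: 32 bins of width 5684/32 = 177.625 minutes
   (* (floor (/ minutes 177.625)) 177.625)      ; => 1243.375, bin lower bound
   ;; logarithmic histogram: bin index = (floor (log (+ 1 minutes) 2))
   (expt 2 (floor (log (+ 1.0 minutes) 2)))))   ; => 1024, bin lower bound
Both results agree with the tables: that task falls into the 1243.375 linear bin and the 1024 logarithmic bin.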
This section lists the data on each study session in the “BEGIN_TIMESTAMP-END_TIMESTAMP|DURATION” format.
The earliest time stamp also marks the beginning of the whole project.
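As a sketch, a single session record in this format can be parsed with the same org-time-string-to-seconds helper that the analysis code below relies on; the record string here is the last session in the list, that is, the start of the project.
(require 'org)
;; Parse one "BEGIN_TIMESTAMP-END_TIMESTAMP|DURATION" record and recover
;; the session length in minutes.
(let* ((record "[2019-08-19 Mon 09:19]-[2019-08-19 Mon 13:32]|4:13")
       (begin  (org-time-string-to-seconds (substring record 0 22)))
       (end    (org-time-string-to-seconds (substring record 23 45))))
  (/ (- end begin) 60))   ; => 253 minutes, matching the recorded 4:13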
[2020-05-10 Sun 14:39]-[2020-05-10 Sun 18:00]|3:21 [2020-05-09 Sat 19:13]-[2020-05-09 Sat 22:13]|3:00 [2020-05-09 Sat 09:34]-[2020-05-09 Sat 14:34]|5:00 [2020-05-08 Fri 21:45]-[2020-05-08 Fri 23:17]|1:32 [2020-05-08 Fri 18:30]-[2020-05-08 Fri 21:18]|2:48 [2020-05-06 Wed 10:12]-[2020-05-06 Wed 11:09]|0:57 [2020-05-05 Tue 12:11]-[2020-05-06 Wed 00:00]|11:49 [2020-05-04 Mon 18:20]-[2020-05-05 Tue 00:30]|6:10 [2020-05-04 Mon 14:02]-[2020-05-04 Mon 17:43]|3:41 [2020-05-03 Sun 21:03]-[2020-05-03 Sun 22:02]|0:59 [2020-04-30 Thu 09:28]-[2020-04-30 Thu 11:23]|1:55 [2020-04-29 Wed 20:00]-[2020-04-29 Wed 23:25]|3:25 [2020-04-28 Tue 22:55]-[2020-04-29 Wed 00:11]|1:16 [2020-04-28 Tue 21:00]-[2020-04-28 Tue 22:50]|1:50 [2020-04-27 Mon 20:09]-[2020-04-27 Mon 22:09]|2:00 [2020-04-26 Sun 20:10]-[2020-04-26 Sun 23:52]|3:42 [2020-04-21 Tue 11:01]-[2020-04-21 Tue 12:26]|1:25 [2020-04-13 Mon 11:40]-[2020-04-13 Mon 11:55]|0:15 [2020-04-11 Sat 11:50]-[2020-04-11 Sat 15:50]|4:00 [2020-04-10 Fri 09:50]-[2020-04-10 Fri 14:26]|4:36 [2020-04-09 Thu 19:50]-[2020-04-09 Thu 23:10]|3:20 [2020-04-09 Thu 09:55]-[2020-04-09 Thu 13:00]|3:05 [2020-04-08 Wed 22:50]-[2020-04-08 Wed 23:55]|1:05 [2020-04-08 Wed 18:30]-[2020-04-08 Wed 21:11]|2:41 [2020-04-08 Wed 09:15]-[2020-04-08 Wed 12:15]|3:00 [2020-04-07 Tue 20:46]-[2020-04-07 Tue 23:37]|2:51 [2020-04-07 Tue 09:41]-[2020-04-07 Tue 11:57]|2:16 [2020-04-06 Mon 18:58]-[2020-04-06 Mon 21:20]|2:22 [2020-04-06 Mon 12:09]-[2020-04-06 Mon 14:15]|2:06 [2020-04-05 Sun 11:30]-[2020-04-05 Sun 15:11]|3:41 [2020-04-04 Sat 22:08]-[2020-04-04 Sat 22:45]|0:37 [2020-04-04 Sat 17:54]-[2020-04-04 Sat 20:50]|2:56 [2020-04-04 Sat 17:24]-[2020-04-04 Sat 17:41]|0:17 [2020-04-04 Sat 15:15]-[2020-04-04 Sat 16:10]|0:55 [2020-04-03 Fri 20:22]-[2020-04-03 Fri 22:21]|1:59 [2020-04-01 Wed 13:05]-[2020-04-01 Wed 15:05]|2:00 [2020-03-29 Sun 13:05]-[2020-03-29 Sun 22:05]|9:00 [2020-03-28 Sat 13:04]-[2020-03-28 Sat 22:04]|9:00 [2020-03-26 Thu 20:20]-[2020-03-26 Thu 23:33]|3:13 [2020-03-26 Thu 10:43]-[2020-03-26 Thu 14:39]|3:56 [2020-03-24 Tue 20:00]-[2020-03-24 Tue 23:50]|3:50 [2020-03-24 Tue 09:10]-[2020-03-24 Tue 12:34]|3:24 [2020-03-23 Mon 19:56]-[2020-03-23 Mon 23:06]|3:10 [2020-03-23 Mon 10:23]-[2020-03-23 Mon 13:23]|3:00 [2020-03-23 Mon 09:06]-[2020-03-23 Mon 10:56]|1:50 [2020-03-22 Sun 18:46]-[2020-03-22 Sun 22:45]|3:59 [2020-03-22 Sun 12:45]-[2020-03-22 Sun 13:46]|1:01 [2020-03-21 Sat 19:07]-[2020-03-21 Sat 21:35]|2:28 [2020-03-17 Tue 19:11]-[2020-03-17 Tue 22:11]|3:00 [2020-03-15 Sun 09:10]-[2020-03-15 Sun 12:41]|3:31 [2020-03-14 Sat 23:01]-[2020-03-14 Sat 23:54]|0:53 [2020-03-14 Sat 20:46]-[2020-03-14 Sat 23:01]|2:15 [2020-03-14 Sat 20:39]-[2020-03-14 Sat 20:46]|0:07 [2020-03-14 Sat 17:23]-[2020-03-14 Sat 20:39]|3:16 [2020-03-14 Sat 12:00]-[2020-03-14 Sat 15:53]|3:53 [2020-03-13 Fri 20:01]-[2020-03-13 Fri 23:01]|3:00 [2020-03-13 Fri 09:20]-[2020-03-13 Fri 11:58]|2:38 [2020-03-12 Thu 20:30]-[2020-03-12 Thu 23:29]|2:59 [2020-03-11 Wed 12:12]-[2020-03-11 Wed 13:18]|1:06 [2020-03-11 Wed 10:45]-[2020-03-11 Wed 11:09]|0:24 [2020-03-11 Wed 09:15]-[2020-03-11 Wed 10:45]|1:30 [2020-03-10 Tue 20:22]-[2020-03-11 Wed 00:09]|3:47 [2020-03-10 Tue 09:08]-[2020-03-10 Tue 13:44]|4:36 [2020-03-09 Mon 22:28]-[2020-03-09 Mon 23:32]|1:04 [2020-03-09 Mon 09:08]-[2020-03-09 Mon 11:59]|2:51 [2020-03-08 Sun 18:30]-[2020-03-08 Sun 21:29]|2:59 [2020-03-08 Sun 16:51]-[2020-03-08 Sun 18:08]|1:17 [2020-03-08 Sun 13:50]-[2020-03-08 Sun 15:36]|1:46 [2020-03-08 Sun 11:56]-[2020-03-08 Sun 13:28]|1:32 [2020-03-07 Sat 18:00]-[2020-03-07 
Sat 21:36]|3:36 [2020-03-07 Sat 11:35]-[2020-03-07 Sat 16:09]|4:34 [2020-03-06 Fri 17:37]-[2020-03-06 Fri 21:48]|4:11 [2020-03-06 Fri 13:11]-[2020-03-06 Fri 14:16]|1:05 [2020-03-06 Fri 09:42]-[2020-03-06 Fri 12:39]|2:57 [2020-03-05 Thu 16:54]-[2020-03-05 Thu 21:34]|4:40 [2020-03-05 Thu 08:58]-[2020-03-05 Thu 13:24]|4:26 [2020-03-04 Wed 19:51]-[2020-03-04 Wed 22:51]|3:00 [2020-03-04 Wed 11:33]-[2020-03-04 Wed 12:31]|0:58 [2020-03-04 Wed 09:32]-[2020-03-04 Wed 11:01]|1:29 [2020-03-03 Tue 19:13]-[2020-03-03 Tue 21:46]|2:33 [2020-03-03 Tue 12:20]-[2020-03-03 Tue 14:58]|2:38 [2020-03-03 Tue 09:13]-[2020-03-03 Tue 11:57]|2:44 [2020-03-02 Mon 18:30]-[2020-03-02 Mon 18:50]|0:20 [2020-03-02 Mon 12:01]-[2020-03-02 Mon 14:43]|2:42 [2020-03-02 Mon 09:02]-[2020-03-02 Mon 11:30]|2:28 [2020-03-01 Sun 19:07]-[2020-03-01 Sun 21:25]|2:18 [2020-03-01 Sun 17:50]-[2020-03-01 Sun 18:41]|0:51 [2020-03-01 Sun 11:09]-[2020-03-01 Sun 15:15]|4:06 [2020-02-29 Sat 21:30]-[2020-02-29 Sat 22:16]|0:46 [2020-02-29 Sat 12:48]-[2020-02-29 Sat 19:17]|6:29 [2020-02-28 Fri 20:21]-[2020-02-28 Fri 23:10]|2:49 [2020-02-28 Fri 18:26]-[2020-02-28 Fri 19:22]|0:56 [2020-02-28 Fri 11:55]-[2020-02-28 Fri 12:02]|0:07 [2020-02-27 Thu 09:20]-[2020-02-27 Thu 10:57]|1:37 [2020-02-26 Wed 20:47]-[2020-02-26 Wed 23:44]|2:57 [2020-02-26 Wed 12:07]-[2020-02-26 Wed 13:40]|1:33 [2020-02-26 Wed 09:29]-[2020-02-26 Wed 11:00]|1:31 [2020-02-25 Tue 19:18]-[2020-02-25 Tue 22:51]|3:33 [2020-02-25 Tue 09:01]-[2020-02-25 Tue 10:42]|1:41 [2020-02-24 Mon 19:23]-[2020-02-25 Tue 00:15]|4:52 [2020-02-24 Mon 13:00]-[2020-02-24 Mon 13:36]|0:36 [2020-02-24 Mon 10:08]-[2020-02-24 Mon 12:39]|2:31 [2020-02-23 Sun 19:20]-[2020-02-23 Sun 20:48]|1:28 [2020-02-23 Sun 12:52]-[2020-02-23 Sun 16:45]|3:53 [2020-02-22 Sat 21:35]-[2020-02-23 Sun 00:25]|2:50 [2020-02-22 Sat 19:59]-[2020-02-22 Sat 21:03]|1:04 [2020-02-22 Sat 12:20]-[2020-02-22 Sat 18:35]|6:15 [2020-02-21 Fri 20:55]-[2020-02-22 Sat 00:30]|3:35 [2020-02-21 Fri 17:30]-[2020-02-21 Fri 18:51]|1:21 [2020-02-21 Fri 10:40]-[2020-02-21 Fri 16:40]|6:00 [2020-02-20 Thu 17:00]-[2020-02-20 Thu 23:33]|6:33 [2020-02-20 Thu 14:43]-[2020-02-20 Thu 15:08]|0:25 [2020-02-20 Thu 10:05]-[2020-02-20 Thu 13:54]|3:49 [2020-02-19 Wed 21:35]-[2020-02-20 Thu 00:36]|3:01 [2020-02-19 Wed 19:50]-[2020-02-19 Wed 21:30]|1:40 [2020-02-19 Wed 13:34]-[2020-02-19 Wed 18:15]|4:41 [2020-02-19 Wed 11:10]-[2020-02-19 Wed 13:34]|2:24 [2020-02-18 Tue 21:05]-[2020-02-19 Wed 00:27]|3:22 [2020-02-18 Tue 19:02]-[2020-02-18 Tue 20:13]|1:11 [2020-02-18 Tue 16:58]-[2020-02-18 Tue 18:36]|1:38 [2020-02-18 Tue 10:55]-[2020-02-18 Tue 15:21]|4:26 [2020-02-17 Mon 19:20]-[2020-02-18 Tue 00:12]|4:52 [2020-02-17 Mon 15:20]-[2020-02-17 Mon 18:00]|2:40 [2020-02-17 Mon 14:17]-[2020-02-17 Mon 15:09]|0:52 [2020-02-16 Sun 21:21]-[2020-02-17 Mon 00:52]|3:31 [2020-02-16 Sun 20:03]-[2020-02-16 Sun 20:14]|0:11 [2020-02-16 Sun 19:00]-[2020-02-16 Sun 19:30]|0:30 [2020-02-16 Sun 16:06]-[2020-02-16 Sun 18:38]|2:32 [2020-02-16 Sun 12:59]-[2020-02-16 Sun 14:37]|1:38 [2020-02-16 Sun 10:30]-[2020-02-16 Sun 12:22]|1:52 [2020-02-15 Sat 22:10]-[2020-02-15 Sat 23:52]|1:42 [2020-02-15 Sat 21:01]-[2020-02-15 Sat 21:50]|0:49 [2020-02-15 Sat 15:03]-[2020-02-15 Sat 18:34]|3:31 [2020-02-14 Fri 18:53]-[2020-02-15 Sat 04:33]|9:40 [2020-02-13 Thu 16:15]-[2020-02-13 Thu 17:21]|1:06 [2020-02-13 Thu 00:12]-[2020-02-13 Thu 01:45]|1:33 [2020-02-12 Wed 18:36]-[2020-02-12 Wed 22:30]|3:54 [2020-02-12 Wed 13:16]-[2020-02-12 Wed 14:55]|1:39 [2020-02-12 Wed 08:37]-[2020-02-12 Wed 12:20]|3:43 [2020-02-11 Tue 
18:51]-[2020-02-11 Tue 21:54]|3:03 [2020-02-11 Tue 04:30]-[2020-02-11 Tue 08:09]|3:39 [2020-02-10 Mon 06:42]-[2020-02-10 Mon 07:28]|0:46 [2020-02-06 Thu 15:42]-[2020-02-06 Thu 22:08]|6:26 [2020-02-01 Sat 15:05]-[2020-02-01 Sat 15:36]|0:31 [2020-01-23 Thu 17:06]-[2020-01-23 Thu 18:51]|1:45 [2020-01-22 Wed 20:53]-[2020-01-22 Wed 21:05]|0:12 [2020-01-22 Wed 13:40]-[2020-01-22 Wed 20:20]|6:40 [2020-01-21 Tue 15:33]-[2020-01-21 Tue 16:57]|1:24 [2020-01-17 Fri 19:13]-[2020-01-17 Fri 23:00]|3:47 [2020-01-11 Sat 10:56]-[2020-01-11 Sat 18:24]|7:28 [2020-01-10 Fri 22:20]-[2020-01-10 Fri 23:56]|1:36 [2020-01-10 Fri 09:40]-[2020-01-10 Fri 13:20]|3:40 [2020-01-09 Thu 20:10]-[2020-01-09 Thu 22:15]|2:05 [2020-01-09 Thu 08:50]-[2020-01-09 Thu 09:55]|1:05 [2020-01-08 Wed 19:21]-[2020-01-09 Thu 00:42]|5:21 [2020-01-08 Wed 09:20]-[2020-01-08 Wed 18:12]|8:52 [2020-01-07 Tue 16:31]-[2020-01-07 Tue 18:31]|2:00 [2020-01-07 Tue 08:55]-[2020-01-07 Tue 12:49]|3:54 [2020-01-06 Mon 22:30]-[2020-01-06 Mon 23:31]|1:01 [2020-01-06 Mon 09:20]-[2020-01-06 Mon 11:56]|2:36 [2020-01-04 Sat 20:25]-[2020-01-04 Sat 21:09]|0:44 [2020-01-04 Sat 09:37]-[2020-01-04 Sat 13:22]|3:45 [2020-01-03 Fri 21:13]-[2020-01-03 Fri 23:59]|2:46 [2020-01-03 Fri 18:13]-[2020-01-03 Fri 19:13]|1:00 [2020-01-03 Fri 12:08]-[2020-01-03 Fri 14:12]|2:04 [2020-01-02 Thu 09:35]-[2020-01-02 Thu 11:58]|2:23 [2019-12-29 Sun 02:12]-[2019-12-29 Sun 05:42]|3:30 [2019-12-26 Thu 16:59]-[2019-12-26 Thu 19:51]|2:52 [2019-12-23 Mon 05:03]-[2019-12-23 Mon 05:31]|0:28 [2019-12-23 Mon 03:02]-[2019-12-23 Mon 04:03]|1:01 [2019-12-22 Sun 16:51]-[2019-12-22 Sun 18:40]|1:49 [2019-12-21 Sat 19:23]-[2019-12-22 Sun 00:19]|4:56 [2019-12-20 Fri 14:10]-[2019-12-20 Fri 17:11]|3:01 [2019-12-19 Thu 23:20]-[2019-12-19 Thu 23:38]|0:18 [2019-12-18 Wed 10:47]-[2019-12-18 Wed 12:47]|2:00 [2019-12-09 Mon 10:47]-[2019-12-09 Mon 13:21]|2:34 [2019-12-08 Sun 17:47]-[2019-12-09 Sun 00:28]|6:41 [2019-12-07 Sat 16:07]-[2019-12-07 Sat 23:15]|7:08 [2019-12-06 Fri 19:04]-[2019-12-06 Fri 20:54]|1:50 [2019-12-04 Wed 18:06]-[2019-12-05 Thu 00:42]|6:36 [2019-12-04 Wed 12:36]-[2019-12-04 Wed 13:05]|0:29 [2019-12-03 Tue 22:18]-[2019-12-03 Tue 23:27]|1:09 [2019-12-03 Tue 21:21]-[2019-12-03 Tue 22:18]|0:57 [2019-12-03 Tue 12:40]-[2019-12-03 Tue 15:25]|2:45 [2019-12-02 Mon 20:06]-[2019-12-02 Mon 23:30]|3:24 [2019-12-01 Sun 22:07]-[2019-12-02 Mon 01:06]|2:59 [2019-12-01 Sun 18:59]-[2019-12-01 Sun 19:59]|1:00 [2019-11-30 Sat 14:19]-[2019-11-30 Sat 15:15]|0:56 [2019-11-29 Fri 20:07]-[2019-11-29 Fri 21:24]|1:17 [2019-11-29 Fri 11:51]-[2019-11-29 Fri 12:10]|0:19 [2019-11-28 Thu 09:30]-[2019-11-28 Thu 15:00]|5:30 [2019-11-26 Tue 09:15]-[2019-11-26 Tue 12:57]|3:42 [2019-11-25 Mon 10:35]-[2019-11-25 Mon 13:02]|2:27 [2019-11-20 Wed 12:08]-[2019-11-20 Wed 14:29]|2:21 [2019-11-20 Wed 09:25]-[2019-11-20 Wed 11:32]|2:07 [2019-11-19 Tue 11:45]-[2019-11-19 Tue 14:42]|2:57 [2019-11-13 Wed 20:52]-[2019-11-13 Wed 22:25]|1:33 [2019-11-12 Tue 19:47]-[2019-11-12 Tue 21:14]|1:27 [2019-11-12 Tue 09:30]-[2019-11-12 Tue 11:49]|2:19 [2019-11-11 Mon 21:03]-[2019-11-11 Mon 23:03]|2:00 [2019-11-10 Sun 21:45]-[2019-11-10 Sun 23:25]|1:40 [2019-10-31 Thu 09:20]-[2019-10-31 Thu 11:07]|1:47 [2019-10-30 Wed 10:35]-[2019-10-30 Wed 13:55]|3:20 [2019-10-29 Tue 22:35]-[2019-10-30 Wed 00:13]|1:38 [2019-10-29 Tue 09:33]-[2019-10-29 Tue 11:33]|2:00 [2019-10-28 Mon 21:52]-[2019-10-29 Tue 00:14]|2:22 [2019-10-28 Mon 18:23]-[2019-10-28 Mon 19:23]|1:00 [2019-10-28 Mon 09:07]-[2019-10-28 Mon 15:10]|6:03 [2019-10-27 Sun 20:44]-[2019-10-28 Mon 00:48]|4:04 
[2019-10-27 Sun 14:17]-[2019-10-27 Sun 15:42]|1:25 [2019-10-27 Sun 12:15]-[2019-10-27 Sun 13:33]|1:18 [2019-10-26 Sat 13:53]-[2019-10-26 Sat 14:10]|0:17 [2019-10-26 Sat 10:15]-[2019-10-26 Sat 10:58]|0:43 [2019-10-25 Fri 15:12]-[2019-10-25 Fri 17:55]|2:43 [2019-10-25 Fri 09:10]-[2019-10-25 Fri 09:59]|0:49 [2019-10-24 Thu 22:23]-[2019-10-25 Fri 00:05]|1:42 [2019-10-24 Thu 18:45]-[2019-10-24 Thu 21:21]|2:36 [2019-10-24 Thu 09:03]-[2019-10-24 Thu 10:47]|1:44 [2019-10-23 Wed 21:24]-[2019-10-24 Wed 23:49]|2:25 [2019-10-23 Wed 09:09]-[2019-10-23 Wed 10:55]|1:46 [2019-10-22 Tue 22:35]-[2019-10-23 Wed 00:13]|1:33 [2019-10-22 Tue 19:10]-[2019-10-22 Tue 21:38]|2:28 [2019-10-22 Tue 09:18]-[2019-10-22 Tue 12:02]|2:44 [2019-10-21 Mon 23:39]-[2019-10-21 Mon 23:49]|0:10 [2019-10-21 Mon 17:23]-[2019-10-21 Mon 18:28]|1:05 [2019-10-21 Mon 09:05]-[2019-10-21 Mon 13:58]|4:53 [2019-10-20 Sun 23:27]-[2019-10-21 Mon 00:00]|0:33 [2019-10-20 Sun 19:32]-[2019-10-20 Sun 20:23]|0:51 [2019-10-20 Sun 12:55]-[2019-10-20 Sun 14:45]|1:50 [2019-10-19 Sat 19:25]-[2019-10-19 Sat 20:45]|1:20 [2019-10-19 Sat 16:12]-[2019-10-19 Sat 18:47]|2:35 [2019-10-17 Thu 19:18]-[2019-10-17 Thu 22:55]|3:37 [2019-10-17 Thu 09:30]-[2019-10-17 Thu 11:42]|2:12 [2019-10-16 Wed 14:52]-[2019-10-16 Wed 14:59]|0:07 [2019-10-16 Wed 09:08]-[2019-10-16 Wed 10:08]|1:00 [2019-10-15 Tue 22:35]-[2019-10-15 Tue 23:30]|0:55 [2019-10-15 Tue 19:30]-[2019-10-15 Tue 21:40]|2:10 [2019-10-15 Tue 09:10]-[2019-10-15 Tue 12:56]|3:46 [2019-10-14 Mon 19:51]-[2019-10-14 Mon 23:10]|3:19 [2019-10-14 Mon 15:57]-[2019-10-14 Mon 17:23]|1:26 [2019-10-12 Sat 20:05]-[2019-10-12 Sat 21:33]|1:28 [2019-10-12 Sat 15:56]-[2019-10-12 Sat 16:07]|0:11 [2019-10-12 Sat 10:31]-[2019-10-12 Sat 12:31]|2:00 [2019-10-11 Fri 19:55]-[2019-10-11 Fri 22:34]|2:39 [2019-10-11 Fri 17:55]-[2019-10-11 Fri 19:28]|1:33 [2019-10-11 Fri 14:35]-[2019-10-11 Fri 14:47]|0:12 [2019-10-11 Fri 09:10]-[2019-10-11 Fri 11:10]|2:00 [2019-10-10 Thu 20:26]-[2019-10-10 Thu 21:48]|1:22 [2019-10-10 Thu 17:26]-[2019-10-10 Thu 19:40]|2:14 [2019-10-10 Thu 12:15]-[2019-10-10 Thu 14:37]|2:22 [2019-10-10 Thu 08:50]-[2019-10-10 Thu 11:29]|2:39 [2019-10-09 Wed 20:16]-[2019-10-09 Wed 20:55]|0:39 [2019-10-09 Wed 16:46]-[2019-10-09 Wed 17:55]|1:09 [2019-10-09 Wed 11:27]-[2019-10-09 Wed 13:38]|2:11 [2019-09-29 Sun 17:01]-[2019-09-29 Sun 17:23]|0:22 [2019-09-27 Fri 08:56]-[2019-09-27 Fri 10:20]|1:24 [2019-09-26 Thu 21:25]-[2019-09-26 Thu 23:38]|2:13 [2019-09-25 Wed 21:55]-[2019-09-25 Wed 22:18]|0:23 [2019-09-25 Wed 12:20]-[2019-09-25 Wed 15:22]|3:02 [2019-09-25 Wed 09:20]-[2019-09-25 Wed 11:25]|2:05 [2019-09-24 Tue 22:10]-[2019-09-24 Tue 23:16]|1:06 [2019-09-24 Tue 12:05]-[2019-09-24 Tue 13:49]|1:44 [2019-09-24 Tue 01:17]-[2019-09-24 Tue 02:15]|0:58 [2019-09-23 Mon 21:26]-[2019-09-23 Mon 22:57]|1:31 [2019-09-22 Sun 14:52]-[2019-09-22 Sun 18:51]|3:59 [2019-09-21 Sat 16:50]-[2019-09-21 Sat 17:55]|1:05 [2019-09-21 Sat 12:31]-[2019-09-21 Sat 15:44]|3:13 [2019-09-20 Fri 22:05]-[2019-09-21 Sat 00:05]|2:00 [2019-09-20 Fri 14:38]-[2019-09-20 Fri 17:20]|2:42 [2019-09-20 Fri 11:42]-[2019-09-20 Fri 12:48]|1:06 [2019-09-19 Thu 21:14]-[2019-09-20 Fri 00:33]|3:19 [2019-09-19 Thu 09:15]-[2019-09-19 Thu 11:14]|1:59 [2019-09-18 Wed 20:55]-[2019-09-18 Wed 23:25]|2:30 [2019-09-17 Tue 22:05]-[2019-09-17 Tue 22:56]|0:51 [2019-09-14 Sat 14:20]-[2019-09-14 Sat 16:57]|2:37 [2019-09-12 Thu 09:31]-[2019-09-12 Thu 10:36]|1:05 [2019-09-11 Wed 22:40]-[2019-09-12 Thu 01:41]|3:01 [2019-09-11 Wed 12:11]-[2019-09-11 Wed 15:16]|3:05 [2019-09-11 Wed 09:19]-[2019-09-11 
Wed 11:49]|2:30 [2019-09-10 Tue 20:60]-[2019-09-10 Tue 23:35]|2:35 [2019-09-10 Tue 16:30]-[2019-09-10 Tue 19:35]|3:05 [2019-09-10 Tue 14:30]-[2019-09-10 Tue 14:41]|0:11 [2019-09-10 Tue 10:27]-[2019-09-10 Tue 11:27]|1:00 [2019-09-09 Mon 09:29]-[2019-09-09 Mon 12:45]|3:16 [2019-09-08 Sun 23:07]-[2019-09-09 Mon 00:46]|1:39 [2019-09-08 Sun 15:10]-[2019-09-08 Sun 21:07]|5:57 [2019-09-06 Fri 12:05]-[2019-09-06 Fri 13:40]|1:35 [2019-09-04 Wed 20:01]-[2019-09-04 Wed 23:19]|3:18 [2019-09-04 Wed 17:01]-[2019-09-04 Wed 20:00]|2:59 [2019-09-04 Wed 09:12]-[2019-09-04 Wed 12:12]|3:00 [2019-09-03 Tue 19:40]-[2019-09-04 Wed 01:20]|5:40 [2019-09-03 Tue 11:12]-[2019-09-03 Tue 14:46]|3:34 [2019-09-03 Tue 10:00]-[2019-09-03 Tue 10:39]|0:39 [2019-09-02 Mon 19:55]-[2019-09-03 Tue 00:00]|4:05 [2019-09-02 Mon 09:53]-[2019-09-02 Mon 13:37]|3:44 [2019-09-01 Sun 19:10]-[2019-09-02 Mon 00:46]|5:36 [2019-08-31 Sat 11:21]-[2019-08-31 Sat 11:44]|0:23 [2019-08-30 Fri 19:21]-[2019-08-30 Fri 23:49]|4:28 [2019-08-30 Fri 15:21]-[2019-08-30 Fri 16:11]|0:50 [2019-08-29 Thu 14:10]-[2019-08-29 Thu 15:16]|1:06 [2019-08-25 Sun 14:15]-[2019-08-25 Sun 21:55]|7:40 [2019-08-22 Thu 15:01]-[2019-08-22 Thu 19:39]|4:38 [2019-08-22 Thu 09:12]-[2019-08-22 Thu 13:30]|4:18 [2019-08-21 Wed 21:15]-[2019-08-22 Thu 00:17]|3:02 [2019-08-21 Wed 12:21]-[2019-08-21 Wed 14:39]|2:18 [2019-08-20 Tue 10:57]-[2019-08-20 Tue 15:04]|4:07 [2019-08-19 Mon 09:19]-[2019-08-19 Mon 13:32]|4:13
This section lists the minute at which each exercise was considered complete (local time). For statistical purposes, the beginning of each exercise is taken to be the completion time of the previous one. For the first exercise, the beginning time is the start of the earliest study session, [2019-08-19 Mon 09:19].
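As a sketch (using two completion timestamps from the data below; the real analysis folds over the whole sorted sequence), the Days column in the summary tables is computed like this:
(require 'org)
;; Calendar days between a task's start (the previous task's completion)
;; and its own completion; mirrors :spent-time-calendar-days in the
;; analysis code at the end of this appendix.
(let ((prev (org-time-string-to-seconds "[2020-03-14 Sat 15:52]"))
      (next (org-time-string-to-seconds "[2020-04-13 Mon 11:45]")))
  (/ (- next prev) (* 60 60 24.0)))   ; => roughly 29.8 days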
Figure 1.1 Tree with the values of subcombinations [2019-08-20 Tue 14:35] Exercise 1.1 Interpreter result [2019-08-20 Tue 14:23] Exercise 1.2 Prefix form [2019-08-20 Tue 14:25] Exercise 1.3 Sum of squares [2020-02-28 Fri 12:01] Exercise 1.4 Compound expressions [2019-08-20 Tue 14:39] Exercise 1.5 Ben's test [2019-08-20 Tue 14:50] Exercise 1.6 If is a special form [2019-08-21 Wed 14:05] Exercise 1.7 Good enough? [2019-08-22 Thu 12:52] Exercise 1.8 Newton's method [2019-08-22 Thu 17:36] Exercise 1.9 Iterative or recursive? [2019-08-29 Thu 15:14] Exercise 1.10 Ackermann's function [2019-08-25 Sun 18:31] Exercise 1.11 Recursive vs iterative [2019-08-25 Sun 19:25] Exercise 1.12 Recursive Pascal's triangle [2019-08-25 Sun 19:42] Exercise 1.13 Fibonacci [2019-08-25 Sun 23:04] Exercise 1.14 ~count-change~ [2019-08-30 Fri 16:09] Exercise 1.15 ~sine~ [2019-08-30 Fri 22:34] Exercise 1.16 Iterative exponentiation [2019-08-30 Fri 23:20] Exercise 1.17 Fast multiplication [2019-08-30 Fri 23:48] Exercise 1.18 Iterative multiplication [2019-08-31 Sat 11:43] Exercise 1.19 Logarithmic Fibonacci [2019-09-01 Sun 20:42] Exercise 1.20 GCD applicative vs normal [2019-09-01 Sun 23:04] Exercise 1.21 ~smallest-divisor~ [2019-09-01 Sun 23:43] Exercise 1.22 ~timed-prime-test~ [2019-09-02 Mon 00:44] Exercise 1.23 ~test-divisor~ [2019-09-02 Mon 09:56] Exercise 1.24 Fermat method [2019-09-02 Mon 11:32] Exercise 1.25 ~expmod~ [2019-09-02 Mon 12:46] Exercise 1.26 ~square~ vs ~mul~ [2019-09-02 Mon 12:50] Exercise 1.27 Carmichael numbers [2019-09-02 Mon 20:50] Exercise 1.28 Miller-Rabin [2019-09-02 Mon 23:28] Exercise 1.29 Simpson's integral [2019-09-03 Tue 10:36] Exercise 1.30 Iterative sum [2019-09-03 Tue 11:19] Exercise 1.31 Product [2019-09-03 Tue 11:59] Exercise 1.32 Accumulator [2019-09-03 Tue 12:23] Exercise 1.33 ~filtered-accumulate~ [2019-09-03 Tue 14:36] Exercise 1.34 lambda [2019-09-03 Tue 14:44] Exercise 1.35 Fixed-point [2019-09-03 Tue 21:05] Exercise 1.36 Fixed-point-with-dampening [2019-09-03 Tue 21:55] Exercise 1.37 Cont-frac [2019-09-04 Wed 11:35] Exercise 1.38 Euler constant [2019-09-04 Wed 11:35] Exercise 1.39 Tan-cf [2019-09-04 Wed 12:11] Exercise 1.40 Newtons-method [2019-09-04 Wed 17:06] Exercise 1.41 Double-double [2019-09-04 Wed 17:21] Exercise 1.42 Compose [2019-09-04 Wed 17:27] Exercise 1.43 Repeated [2019-09-04 Wed 17:54] Exercise 1.44 Smoothing [2019-09-04 Wed 20:17] Exercise 1.45 Nth root [2019-09-04 Wed 21:37] Exercise 1.46 ~iterative-improve~ [2019-09-04 Wed 22:25] Exercise 2.1 ~make-rat~ [2019-09-06 Fri 13:00] Exercise 2.2 ~make-segment~ [2019-09-06 Fri 13:34] Exercise 2.3 ~make-rectangle~ [2019-09-08 Sun 17:58] Exercise 2.4 ~cons~ lambda [2019-09-08 Sun 18:08] Exercise 2.5 ~cons~ pow [2019-09-08 Sun 19:07] Exercise 2.6 Church Numerals [2019-09-08 Sun 19:41] Exercise 2.7 ~make-interval~ [2019-09-08 Sun 20:09] Exercise 2.8 ~sub-interval~ [2019-09-08 Sun 23:07] Exercise 2.9 ~interval-width~ [2019-09-08 Sun 23:15] Exercise 2.10 Div interval better [2019-09-08 Sun 23:30] Exercise 2.11 Mul interval nine cases [2019-09-09 Mon 00:45] Exercise 2.12 ~make-center-percent~ [2019-09-09 Mon 10:11] Exercise 2.13 Formula for tolerance [2019-09-09 Mon 10:16] Exercise 2.14 Parallel resistors [2019-09-09 Mon 11:24] Exercise 2.15 Better intervals [2019-09-09 Mon 11:34] Exercise 2.16 Interval arithmetic [2019-09-09 Mon 11:37] Exercise 2.17 ~last-pair~ [2019-09-10 Tue 10:48] Exercise 2.18 ~reverse~ [2019-09-10 Tue 10:57] Exercise 2.19 Coin values [2019-09-10 Tue 11:27] Exercise 2.20 Dotted-tail notation 
[2019-09-10 Tue 18:55] Exercise 2.21 Map square list [2019-09-10 Tue 19:14] Exercise 2.22 Wrong list order [2019-09-10 Tue 19:24] Exercise 2.23 ~for-each~ [2019-09-10 Tue 19:33] Exercise 2.24 List plot result [2019-09-10 Tue 22:13] Exercise 2.25 ~caddr~ [2019-09-10 Tue 23:07] Exercise 2.26 ~append~ ~cons~ ~list~ [2019-09-10 Tue 23:23] Exercise 2.27 Deep reverse [2019-09-11 Wed 09:47] Exercise 2.28 Fringe [2019-09-11 Wed 10:24] Exercise 2.29 Mobile [2019-09-11 Wed 11:47] Exercise 2.30 ~square-tree~ [2019-09-11 Wed 14:11] Exercise 2.31 Tree-map square tree [2019-09-11 Wed 14:38] Exercise 2.32 Subsets [2019-09-11 Wed 14:53] Exercise 2.33 Map append length [2019-09-11 Wed 23:53] Exercise 2.34 Horners rule [2019-09-12 Thu 00:01] Exercise 2.35 ~count-leaves-accumulate~ [2019-09-12 Thu 00:17] Exercise 2.36 ~accumulate-n~ [2019-09-12 Thu 00:26] Exercise 2.37 ~matrix-*-vector~ [2019-09-12 Thu 00:50] Exercise 2.38 ~fold-left~ [2019-09-12 Thu 09:45] Exercise 2.39 Reverse ~fold-right~ ~fold-left~ [2019-09-12 Thu 09:52] Exercise 2.40 ~unique-pairs~ [2019-09-12 Thu 10:34] Exercise 2.41 ~triple-sum~ [2019-09-14 Sat 15:15] Figure 2.8 A solution to the eight-queens puzzle [2019-09-14 Sat 15:17] Exercise 2.42 k-queens [2019-09-17 Tue 22:27] Exercise 2.43 Slow k-queens [2019-09-17 Tue 22:55] Exercise 2.44 ~up-split~ [2019-09-23 Mon 22:54] Exercise 2.45 ~split~ [2019-09-24 Tue 01:37] Exercise 2.46 ~make-vect~ [2019-09-20 Fri 12:48] Exercise 2.47 ~make-frame~ [2019-09-20 Fri 14:48] Exercise 2.48 ~make-segment~ [2019-09-20 Fri 16:06] Exercise 2.49 ~segments->painter~ applications [2019-09-20 Fri 23:10] Exercise 2.50 ~flip-horiz~ ~rotate270~ ~rotate180~ [2019-09-20 Fri 23:37] Exercise 2.51 ~below~ [2019-09-22 Sun 18:50] Exercise 2.52 Modify square-limit [2019-09-24 Tue 12:25] Exercise 2.53 Quote introduction [2019-09-24 Tue 12:36] Exercise 2.54 ~equal?~ implementation [2019-09-24 Tue 13:48] Exercise 2.55 Quote quote [2019-09-24 Tue 13:48] Exercise 2.56 Differentiation exponentiation [2019-09-24 Tue 23:14] Exercise 2.57 Differentiate three sum [2019-09-25 Wed 12:40] Exercise 2.58 ~infix-notation~ [2019-09-25 Wed 15:21] Exercise 2.59 ~union-set~ [2019-09-25 Wed 22:00] Exercise 2.60 ~duplicate-set~ [2019-09-25 Wed 22:17] Exercise 2.61 Sets as ordered lists [2019-09-26 Thu 21:44] Exercise 2.62 ~ordered-union-set~ (ordered list) [2019-09-26 Thu 21:38] Exercise 2.63 ~tree->list~ (binary search tree) [2019-09-26 Thu 23:37] Exercise 2.64 Balanced tree [2019-09-29 Sun 17:22] Exercise 2.65 ~tree-union-set~ [2019-10-09 Wed 12:13] Exercise 2.66 Tree-lookup [2019-10-09 Wed 13:03] Exercise 2.67 Huffman decode a simple message [2019-10-09 Wed 20:20] Exercise 2.68 Huffman encode a simple message [2019-10-09 Wed 20:53] Exercise 2.69 Generate Huffman tree [2019-10-10 Thu 11:28] Exercise 2.70 Generate a tree and encode a song [2019-10-10 Thu 13:11] Exercise 2.71 Huffman tree for 5 and 10 [2019-10-10 Thu 19:22] Exercise 2.72 Huffman order of growth [2019-10-10 Thu 20:34] Exercise 2.73 Data-driven ~deriv~ [2019-10-11 Fri 11:05] Exercise 2.74 Insatiable Enterprises [2019-10-11 Fri 20:56] Exercise 2.75 ~make-from-mag-ang~ message passing [2019-10-11 Fri 21:24] Exercise 2.76 Types or functions? [2019-10-11 Fri 21:29] Exercise 2.77 Generic algebra magnitude [2019-10-12 Sat 16:01] Exercise 2.78 Ordinary numbers for Scheme [2019-10-12 Sat 21:06] Exercise 2.79 Generic equality [2019-10-14 Mon 15:58] Exercise 2.80 Generic arithmetic zero? 
[2019-10-14 Mon 17:18] Exercise 2.81 Coercion to itself [2019-10-15 Tue 11:16] Exercise 2.82 Three argument coercion [2019-10-15 Tue 21:40] Exercise 2.83 Numeric Tower and (raise) [2019-10-16 Wed 14:53] Exercise 2.84 ~raise-type~ in ~apply-generic~ [2019-10-17 Thu 11:39] Exercise 2.85 Dropping a type [2019-10-20 Sun 13:47] Exercise 2.86 Compound complex numbers [2019-10-20 Sun 20:22] Exercise 2.87 Generalized zero? [2019-10-21 Mon 18:25] Exercise 2.88 Subtraction of polynomials [2019-10-22 Tue 09:55] Exercise 2.89 Dense term-lists [2019-10-22 Tue 11:55] Exercise 2.90 Dense polynomials as a package [2019-10-22 Tue 21:31] Exercise 2.91 Division of polynomials [2019-10-23 Wed 00:11] Exercise 2.92 Add, mul for different variables [2019-10-27 Sun 13:32] Exercise 2.93 Rational polynomials [2019-10-27 Sun 22:36] Exercise 2.94 GCD for polynomials [2019-10-28 Mon 00:47] Exercise 2.95 Non-integer problem [2019-10-28 Mon 11:35] Exercise 2.96 Integerizing factor [2019-10-28 Mon 19:23] Exercise 2.97 Reduction of polynomials [2019-10-29 Tue 00:12] Exercise 3.1 Accumulators [2019-10-29 Tue 10:24] Exercise 3.2 Make-monitored [2019-10-29 Tue 11:03] Exercise 3.3 Password protection [2019-10-29 Tue 11:17] Exercise 3.4 Call-the-cops [2019-10-29 Tue 11:32] Exercise 3.5 Monte-Carlo [2019-10-30 Wed 00:12] Exercise 3.6 reset a prng [2019-10-30 Wed 11:42] Exercise 3.7 Joint accounts [2019-10-30 Wed 13:07] Exercise 3.8 Right-to-left vs Left-to-right [2019-10-30 Wed 13:45] Exercise 3.9 Environment structures [2019-11-20 Wed 14:28] Exercise 3.10 ~let~ to create state variables [2019-11-25 Mon 12:52] Exercise 3.11 Internal definitions [2019-11-26 Tue 12:44] Exercise 3.12 Drawing ~append!~ [2019-11-29 Fri 11:55] Exercise 3.13 ~make-cycle~ [2019-11-29 Fri 12:09] Exercise 3.14 ~mystery~ [2019-11-29 Fri 21:23] Exercise 3.15 ~set-to-wow!~ [2019-12-01 Sun 19:59] Exercise 3.16 ~count-pairs~ [2019-12-02 Mon 00:05] Exercise 3.17 Real ~count-pairs~ [2019-12-02 Mon 00:47] Exercise 3.18 Finding cycles [2019-12-02 Mon 01:04] Exercise 3.19 Efficient finding cycles [2019-12-02 Mon 23:29] Exercise 3.20 Procedural ~set-car!~ [2019-12-03 Tue 14:40] Exercise 3.21 Queues [2019-12-03 Tue 15:10] Exercise 3.22 Procedural queue [2019-12-03 Tue 22:13] Exercise 3.23 Dequeue [2019-12-03 Tue 23:24] Exercise 3.24 Tolerant tables [2019-12-04 Wed 18:07] Exercise 3.25 Multilevel tables [2019-12-06 Fri 20:35] Exercise 3.26 Binary tree table [2019-12-06 Fri 20:53] Exercise 3.27 Memoization [2019-12-07 Sat 16:08] Exercise 3.28 Primitive or-gate [2019-12-08 Sun 23:43] Exercise 3.29 Compound or-gate [2019-12-08 Sun 23:45] Exercise 3.30 Ripple-carry adder [2019-12-08 Sun 23:58] Exercise 3.31 Initial propagation [2019-12-09 Mon 00:16] Exercise 3.32 Order matters [2019-12-09 Mon 00:26] Exercise 3.33 Averager constraint [2019-12-18 Wed 11:29] Exercise 3.34 Wrong squarer [2019-12-18 Wed 12:30] Exercise 3.35 Correct squarer [2019-12-18 Wed 12:47] Exercise 3.36 Connector environment diagram [2019-12-21 Sat 20:27] Exercise 3.37 Expression-based constraints [2019-12-21 Sat 21:20] Exercise 3.38 Timing [2019-12-21 Sat 22:48] Exercise 3.39 Serializer [2019-12-23 Mon 05:11] Exercise 3.40 Three parallel multiplications [2019-12-29 Sun 04:32] Exercise 3.41 Better protected account [2020-01-02 Thu 10:02] Exercise 3.42 Saving on serializers [2020-01-02 Thu 10:35] Exercise 3.43 Multiple serializations [2020-01-02 Thu 11:33] Exercise 3.44 Transfer money [2020-01-02 Thu 11:40] Exercise 3.45 New plus old serializers [2020-01-02 Thu 11:46] Exercise 3.46 Broken test-and-set! 
[2020-01-02 Thu 11:56] Exercise 3.47 Semaphores [2020-01-03 Fri 12:59] Exercise 3.48 Serialized-exchange deadlock [2020-01-03 Fri 13:30] Exercise 3.49 When numbering does not work [2020-01-03 Fri 13:41] Exercise 3.50 ~stream-map~ multiple arguments [2020-01-03 Fri 21:18] Exercise 3.51 ~stream-show~ [2020-01-03 Fri 21:28] Exercise 3.52 Streams with mind-boggling [2020-01-03 Fri 22:17] Exercise 3.53 Stream power of two [2020-01-03 Fri 22:40] Exercise 3.54 ~mul-streams~ [2020-01-03 Fri 22:47] Exercise 3.55 Streams partial-sums [2020-01-03 Fri 23:05] Exercise 3.56 Hamming's streams-merge [2020-01-03 Fri 23:26] Exercise 3.57 Exponential additions fibs [2020-01-03 Fri 23:36] Exercise 3.58 Cryptic stream [2020-01-03 Fri 23:50] Exercise 3.59 Power series [2020-01-04 Sat 09:58] Exercise 3.60 ~mul-series~ [2020-01-04 Sat 11:07] Exercise 3.61 ~power-series-inversion~ [2020-01-04 Sat 13:13] Exercise 3.62 ~div-series~ [2020-01-04 Sat 13:21] Exercise 3.63 ~sqrt-stream~ [2020-01-04 Sat 20:32] Exercise 3.64 ~stream-limit~ [2020-01-06 Mon 09:38] Exercise 3.65 Approximating logarithm [2020-01-06 Mon 10:34] Exercise 3.66 Lazy pairs [2020-01-06 Mon 22:55] Exercise 3.67 All possible pairs [2020-01-06 Mon 23:09] Exercise 3.68 ~pairs-louis~ [2020-01-06 Mon 23:26] Exercise 3.69 ~triples~ [2020-02-17 Mon 20:10] Exercise 3.70 ~merge-weighted~ [2020-01-07 Tue 11:58] Exercise 3.71 Ramanujan numbers [2020-01-07 Tue 12:49] Exercise 3.72 Ramanujan 3-numbers [2020-01-08 Wed 10:27] Figure 3.32 Integral-signals [2020-01-08 Wed 10:59] Exercise 3.73 RC-circuit [2020-01-08 Wed 13:09] Exercise 3.74 Zero-crossings [2020-01-08 Wed 16:50] Exercise 3.75 Filtering signals [2020-01-08 Wed 18:11] Exercise 3.76 ~stream-smooth~ [2020-01-08 Wed 19:56] Exercise 3.77 Streams integral [2020-01-08 Wed 20:51] Exercise 3.78 Second order differential equation [2020-01-08 Wed 21:47] Exercise 3.79 General second-order ode [2020-01-08 Wed 21:57] Figure 3.36 [2020-01-08 Wed 23:21] Exercise 3.80 RLC circuit [2020-01-08 Wed 23:40] Exercise 3.81 Generator-in-streams [2020-01-09 Thu 00:37] Exercise 3.82 Streams Monte-Carlo [2020-01-09 Thu 09:42] Exercise 4.1 ~list-of-values~ ordered [2020-01-09 Thu 20:11] Exercise 4.2 Application before assignments [2020-01-09 Thu 20:41] Exercise 4.3 Data-directed eval [2020-01-09 Thu 21:24] Exercise 4.4 ~eval-and~ and ~eval-or~ [2020-01-09 Thu 22:14] Exercise 4.5 ~cond~ with arrow [2020-01-22 Wed 16:36] Exercise 4.6 Implementing let [2020-01-22 Wed 17:03] Exercise 4.7 Implementing let* [2020-01-22 Wed 18:09] Exercise 4.8 Implementing named let [2020-01-22 Wed 19:50] Exercise 4.9 Implementing until [2020-01-23 Thu 18:06] Exercise 4.10 Modifying syntax [2020-02-06 Thu 22:08] Exercise 4.11 Environment as a list of bindings [2020-02-11 Tue 06:58] Exercise 4.12 Better abstractions setting value [2020-02-11 Tue 19:40] Exercise 4.13 Implementing ~make-unbound!~ [2020-02-12 Wed 08:52] Exercise 4.14 Meta map versus built-in map [2020-02-12 Wed 08:58] Exercise 4.15 The ~halts?~ predicate [2020-02-12 Wed 09:24] Exercise 4.16 Simultaneous internal definitions [2020-02-12 Wed 13:17] Exercise 4.17 Environment for internal definitions [2020-02-12 Wed 14:09] Exercise 4.18 Alternative scanning [2020-02-12 Wed 14:35] Exercise 4.19 Mutual simultaneous definitions [2020-02-12 Wed 19:52] Exercise 4.20 ~letrec~ [2020-02-13 Thu 00:49] Exercise 4.21 Y-combinator [2020-02-13 Thu 01:07] Exercise 4.22 Extending evaluator to support ~let~ [2020-02-14 Fri 19:33] Exercise 4.23 Analysing sequences [2020-02-14 Fri 19:40] Exercise 4.24 Analysis time 
test [2020-02-14 Fri 20:12] Exercise 4.25 Lazy factorial [2020-02-14 Fri 21:01] Exercise 4.26 ~unless~ as a special form [2020-02-15 Sat 04:32] Exercise 4.27 Mutation in lazy interpreters [2020-02-15 Sat 16:54] Exercise 4.28 Eval before applying [2020-02-15 Sat 17:01] Exercise 4.29 Lazy eval slow without memoization [2020-02-15 Sat 17:51] Exercise 4.30 Lazy sequences [2020-02-15 Sat 21:32] Exercise 4.31 Lazy arguments with syntax extension [2020-02-15 Sat 23:44] Exercise 4.32 Streams versus lazy lists [2020-02-16 Sun 11:49] Exercise 4.33 Quoted lazy lists [2020-02-16 Sun 14:09] Exercise 4.34 Printing lazy lists [2020-02-16 Sun 19:25] Exercise 4.35 Pythagorean triples [2020-02-17 Mon 17:25] Exercise 4.36 Infinite Pythagorean triples [2020-02-17 Mon 20:26] Exercise 4.37 Another method for triples [2020-02-17 Mon 21:17] Exercise 4.38 Logical puzzle - Not same floor [2020-02-17 Mon 21:56] Exercise 4.39 Order of restrictions [2020-02-17 Mon 22:01] Exercise 4.40 People to floor assignment [2020-02-17 Mon 22:29] Exercise 4.41 Ordinary Scheme floor problem [2020-02-18 Tue 00:12] Exercise 4.42 The liars puzzle [2020-02-18 Tue 12:16] Exercise 4.43 Problematical Recreations [2020-02-18 Tue 13:31] Exercise 4.44 Nondeterministic eight queens [2020-02-18 Tue 15:17] Exercise 4.45 Five parses [2020-02-18 Tue 19:45] Exercise 4.46 Order of parsing [2020-02-18 Tue 19:55] Exercise 4.47 Parse verb phrase by Louis [2020-02-18 Tue 20:13] Exercise 4.48 Extending the grammar [2020-02-18 Tue 21:06] Exercise 4.49 Alyssa's generator [2020-02-18 Tue 21:51] Exercise 4.50 The ~ramb~ operator [2020-02-17 Mon 14:56] Exercise 4.51 Implementing ~permanent-set!~ [2020-02-18 Tue 22:34] Exercise 4.52 ~if-fail~ [2020-02-19 Wed 00:05] Exercise 4.53 Test evaluation [2020-02-19 Wed 00:12] Exercise 4.54 ~analyze-require~ [2020-02-19 Wed 11:26] Exercise 4.55 Simple queries [2020-02-19 Wed 17:38] Exercise 4.56 Compound queries [2020-02-19 Wed 18:04] Exercise 4.57 Custom rules [2020-02-19 Wed 21:36] Exercise 4.58 Big shot [2020-02-19 Wed 22:12] Exercise 4.59 Meetings [2020-02-19 Wed 22:57] Exercise 4.60 Pairs live near [2020-02-19 Wed 23:20] Exercise 4.61 Next-to relation [2020-02-19 Wed 23:31] Exercise 4.62 Last-pair [2020-02-20 Thu 00:19] Exercise 4.63 Genesis [2020-02-20 Thu 10:28] Figure 4.6 How the system works [2020-02-20 Thu 10:59] Exercise 4.64 Broken outranked-by [2020-02-20 Thu 12:33] Exercise 4.65 Second-degree subordinates [2020-02-20 Thu 12:50] Exercise 4.66 Ben's accumulation [2020-02-20 Thu 13:08] Exercise 4.67 Loop detector [2020-02-20 Thu 23:20] Exercise 4.68 Reverse rule [2020-02-21 Fri 15:48] Exercise 4.69 Great grandchildren [2020-02-21 Fri 17:43] Exercise 4.70 Cons-stream delays second argument [2020-02-20 Thu 17:08] Exercise 4.71 Louis' simple queries [2020-02-21 Fri 20:56] Exercise 4.72 ~interleave-stream~ [2020-02-20 Thu 17:11] Exercise 4.73 ~flatten-stream~ delays [2020-02-20 Thu 17:19] Exercise 4.74 Alyssa's streams [2020-02-21 Fri 22:00] Exercise 4.75 ~unique~ special form [2020-02-21 Fri 23:19] Exercise 4.76 Improving ~and~ [2020-02-22 Sat 18:27] Exercise 4.77 Lazy queries [2020-03-14 Sat 15:42] Exercise 4.78 Non-deterministic queries [2020-03-15 Sun 12:40] Exercise 4.79 Prolog environments [2020-05-10 Sun 17:59] Figure 5.1 Data paths for a Register Machine [2020-02-23 Sun 13:18] Figure 5.2 Controller for a GCD Machine [2020-02-22 Sat 22:27] Exercise 5.1 Register machine plot [2020-02-22 Sat 22:56] Exercise 5.2 Register machine Exercise 5.1 [2020-02-23 Sun 13:26] Exercise 5.3 Machine for ~sqrt~, Newton 
Method [2020-02-23 Sun 20:47] Exercise 5.4 Recursive register machines [2020-02-24 Mon 20:49] Exercise 5.5 Manual factorial and Fibonacci [2020-02-24 Mon 23:27] Exercise 5.6 Fibonacci machine extra instructions [2020-02-24 Mon 23:43] Exercise 5.7 Test the 5.4 machine on a simulator [2020-02-25 Tue 10:42] Exercise 5.8 Ambiguous labels [2020-02-25 Tue 21:58] Exercise 5.9 Prohibit (op)s on labels [2020-02-25 Tue 22:23] Exercise 5.10 Changing syntax [2020-02-25 Tue 22:39] Exercise 5.11 Save and restore [2020-02-26 Wed 13:30] Exercise 5.12 Data paths from controller [2020-02-26 Wed 23:40] Exercise 5.13 Registers from controller [2020-02-27 Thu 10:57] Exercise 5.14 Profiling [2020-02-28 Fri 20:21] Exercise 5.15 Instruction counting [2020-02-28 Fri 21:36] Exercise 5.16 Tracing execution [2020-02-28 Fri 22:59] Exercise 5.17 Printing labels [2020-02-29 Sat 17:43] Exercise 5.18 Register tracing [2020-02-29 Sat 14:07] Exercise 5.19 Breakpoints [2020-02-29 Sat 17:42] Exercise 5.20 Drawing a list ~(#1=(1 . 2) #1)~ [2020-02-29 Sat 22:15] Exercise 5.21 Register machines list operations [2020-03-01 Sun 13:03] Exercise 5.22 ~append~ and ~append!~ as machines [2020-03-01 Sun 14:11] Exercise 5.23 EC-evaluator with ~let~ and ~cond~ [2020-03-02 Mon 10:52] Exercise 5.24 Making ~cond~ a primitive [2020-03-02 Mon 14:42] Exercise 5.25 Normal-order (lazy) evaluation [2020-03-03 Tue 14:57] Exercise 5.26 Tail recursion with ~factorial~ [2020-03-03 Tue 19:38] Exercise 5.27 Stack depth for recursive factorial [2020-03-03 Tue 19:49] Exercise 5.28 Interpreters without tail recursion [2020-03-03 Tue 20:29] Exercise 5.29 Stack in tree-recursive Fibonacci [2020-03-03 Tue 20:50] Exercise 5.30 Errors [2020-03-04 Wed 11:35] Exercise 5.31 a ~preserving~ mechanism [2020-03-04 Wed 21:36] Exercise 5.32 Symbol-lookup optimization [2020-03-04 Wed 22:51] Exercise 5.33 Compiling ~factorial-alt~ [2020-03-05 Thu 16:55] Exercise 5.34 Compiling iterative factorial [2020-03-05 Thu 20:58] Exercise 5.35 Decompilation [2020-03-05 Thu 21:30] Exercise 5.36 Order of evaluation [2020-03-06 Fri 17:47] Exercise 5.37 ~preserving~ [2020-03-06 Fri 21:01] Exercise 5.38 Open code primitives [2020-03-07 Sat 18:57] Exercise 5.39 ~lexical-address-lookup~ [2020-03-07 Sat 20:41] Exercise 5.40 Compile-time environment [2020-03-08 Sun 15:02] Exercise 5.41 ~find-variable~ [2020-03-07 Sat 19:37] Exercise 5.42 Compile variable and assignment [2020-03-08 Sun 12:59] Exercise 5.43 Scanning out defines [2020-03-08 Sun 21:00] Exercise 5.44 Open code compile-time environment [2020-03-08 Sun 21:29] Exercise 5.45 Stack usage for ~factorial~ [2020-03-09 Mon 10:09] Exercise 5.46 Stack usage for ~fibonacci~ [2020-03-09 Mon 10:34] Exercise 5.47 Calling interpreted procedures [2020-03-09 Mon 11:45] Exercise 5.48 ~compile-and-run~ [2020-03-10 Tue 12:14] Exercise 5.49 ~read-compile-execute-print~ loop [2020-03-10 Tue 12:36] Exercise 5.50 Compiling the metacircular evaluator [2020-03-14 Sat 15:52] Exercise 5.51 EC-evaluator in low-level language [2020-04-13 Mon 11:45] Exercise 5.52 Making a compiler for Scheme [2020-05-06 Wed 11:09]
This section includes the Emacs Lisp code used to analyse the data above. The code is directly executable in the org-mode version of the report; readers of the PDF version are advised to consult the org-mode source.
( require ' org-element) ( cl-labels ( ; lexical-defun (decorate-orgtable (tbl) (seq-concatenate 'string "(" "| Exercise | Days | Sessions | Minutes |" (char-to-string ?\n) "|- + - + - + - |" (format-orgtable tbl) ")") ) ; lexical-defun (format-orgtable (list-of-lists) (apply #'seq-concatenate (cons 'string (seq-map ( lambda (x) (format-table-line x)) list-of-lists))) ) ; lexical-defun (format-table-line (line) (seq-concatenate 'string (char-to-string ?\n) "|" (substring (car line) 0 (min 60 (seq-length (car line)))) "|" (format "%3.3f"(caddr line)) "|" (format "%3d" (nth 4 line)) "|" (format "%3.3f" (nth 6 line)) "|") ) ;; lexical-defun (get-study-sessions-data () ( save-excursion (org-babel-goto-named-src-block "study-sessions-data") (seq-map ( lambda (x) (list (org-time-string-to-seconds (substring-no-properties x 3 23)) (org-time-string-to-seconds (substring-no-properties x 26 46)) )) (seq-subseq (split-string (org-element-property :value (org-element-at-point)) "\n") 0 -1))) ) ;; lexical-defun (get-task-sequence-data () ( save-excursion (org-babel-goto-named-src-block "completion-times-data") ( let ((exercise-index 0)) (seq-mapn ( lambda (nam dat) ( setq exercise-index (+ 1 exercise-index)) (list nam dat exercise-index)) (apply #'seq-concatenate (cons 'list (seq-map-indexed ( lambda (x idx) ( if (= 0 (mod idx 2)) (list x) nil)) (seq-subseq (split-string (org-element-property :value (org-element-at-point)) "\n") 0 -1)))) (apply #'seq-concatenate (cons 'list (seq-map-indexed ( lambda (x idx) ( if (= 1 (mod idx 2)) ; (print x) (list x) nil)) (seq-subseq (split-string (org-element-property :value (org-element-at-point)) "\n") 0 -1))))))) ) ;; lexical-defun (sort-task-seq (task-seq) (seq-sort ( lambda (x y) ( if (org-time< (cadr x) (cadr y)) t nil)) task-seq) ) ;; lexical-defun (find-out-of-order-tasks (task-seq) (seq-reduce ( lambda (acc next-elem) ( if (org-time< (cadr next-elem) (cadr acc)) (list (+ 1 (car acc)) (cadr next-elem) (cons (cadddr acc) (caddr acc)) next-elem) (list (car acc) (cadr next-elem) (caddr acc) next-elem))) task-seq (list 0 "2019-08-19 Mon 09:19" (list) (list))) ) ;; lexical-defun (find-spanning-sessions-and-duration (prev-time-stamp next-time-stamp study-sessions) (seq-reduce ( lambda (acc next-session) ( let ((session-start (car next-session)) (session-end (cadr next-session))) ( cond ((<= session-end prev-time-stamp) acc) ((<= next-time-stamp session-start) acc) (t (list (+ (car acc) 1) (+ (cadr acc) ( cond (( and (<= prev-time-stamp session-start) (<= session-end next-time-stamp)) (- session-end session-start)) (( and (<= session-start prev-time-stamp) (<= prev-time-stamp session-end) (<= session-end next-time-stamp)) (- session-end prev-time-stamp)) (( and (<= prev-time-stamp session-start) (<= session-start next-time-stamp) (<= next-time-stamp session-end)) (- next-time-stamp session-start)) (( and (<= session-start prev-time-stamp) (<= next-time-stamp session-end)) (- next-time-stamp prev-time-stamp)) (t 0)))))))) study-sessions (list 0 0))) ;; lexical-defun (summarize-list (sorted-task-seq study-sessions) (cadr (seq-reduce ( lambda (acc next-elem) ( let ((prev-time-stamp (car acc)) (retval (cadr acc)) (next-time-stamp (org-time-string-to-seconds (cadr next-elem))) (exercise-name (car next-elem)) (exercise-index (caddr next-elem))) ( let ((spans-sessions (find-spanning-sessions-and-duration prev-time-stamp next-time-stamp study-sessions))) (list next-time-stamp (cons (list exercise-name :spent-time-calendar-days (/ (- next-time-stamp prev-time-stamp) (* 60 60 
24)) :spans-sessions ( if (not (eq 0 (car spans-sessions))) (car spans-sessions) ( error "Fix time: %s, spans-sessions=%s" next-elem spans-sessions)) :spent-time-net-minutes (/ (cadr spans-sessions) 60) :original-index exercise-index) retval))))) sorted-task-seq (list (org-time-string-to-seconds "2019-08-19 Mon 09:19") ()))) ) (r-h (l) (seq-reverse (seq-subseq l 0))) ;; lexical-defun (make-logarithmic-histogram (astrotime-list) ( let* ((numbins (ceiling (log (+ 1.0 (seq-reduce #'max (seq-map ( lambda (x) (nth 6 x)) (r-h astrotime-list)) 0)) 2)))) (seq-reduce ( lambda (acc elem) ( let* ((hardness (nth 6 elem)) (nbin (floor (log (+ 1.0 hardness) 2)))) (aset acc nbin (+ 1 (aref acc nbin))) acc)) (r-h astrotime-list) (make-vector numbins 0))) ) ;; lexical-defun (make-linear-histogram (astrotime-list) ( let* ((numbins 32) (binsize (ceiling (/ (seq-reduce #'max (seq-map ( lambda (x) (nth 6 x)) (r-h astrotime-list)) 0) numbins )))) (seq-reduce ( lambda (acc elem) ( let* ((hardness (nth 6 elem)) (nbin (floor (/ hardness binsize)))) (aset acc nbin (+ 1 (aref acc nbin))) acc)) (r-h astrotime-list) (make-vector numbins 0))) ) ;; lexical-defun (sort-by-hardness (astrotime-list) ;; 6 is the hardness index (seq-sort ( lambda (x y) ( let* ((hardness-x (nth 6 x)) (hardness-y (nth 6 y))) ( if (< hardness-x hardness-y) t nil))) astrotime-list) ) ;; lexical-defun (sort-by-nsessions (astrotime-list) ;; 4 is the nsessions index (seq-sort ( lambda (x y) ( let* ((nses-x (nth 4 x)) (nses-y (nth 4 y))) ( if (< nses-x nses-y) t nil))) astrotime-list) ) ;; lexical-defun (sort-by-original-index (astrotime-list) ;; 8 is the original index (seq-sort ( lambda (x y) ( let* ((oidx-x (nth 8 x)) (oidx-y (nth 8 y))) ( if (< oidx-x oidx-y) t nil))) astrotime-list) ) ) ;; end cl-labels defuns ( let* ( ;; lexical-define (study-sessions (get-study-sessions-data)) ;; lexical-define (task-seq (get-task-sequence-data)) ;; lexical-define (sorted-task-seq (sort-task-seq task-seq)) ;; lexical-define (out-of-order-tasks (find-out-of-order-tasks task-seq)) ;; lexical-define (astrotime-list (summarize-list sorted-task-seq study-sessions)) ;; lexical-define (problems-sorted-by-completion-time (seq-reverse astrotime-list)) ;; lexical-define (logarithmic-histogram (make-logarithmic-histogram astrotime-list)) ;; lexical-define (linear-histogram (make-linear-histogram astrotime-list)) ;; lexical-define (problems-sorted-by-hardness (sort-by-hardness astrotime-list)) ;; lexical-define (problems-sorted-by-nsessions (sort-by-nsessions astrotime-list)) ;; lexical-define (problems-sorted-by-original-index (sort-by-original-index astrotime-list)) ) (princ (char-to-string ?\()) (pp "Amount of the out-of-order-problems: ") (princ (char-to-string ?\()) (pp (number-to-string (car out-of-order-tasks))) (princ (char-to-string ?\n)) (pp "Out-of-order problems :") (princ (char-to-string ?\n)) (pp (caddr out-of-order-tasks)) (princ (char-to-string ?\n)) (pp "Task summary (completion time):") (princ (char-to-string ?\n)) (princ (decorate-orgtable (seq-subseq problems-sorted-by-completion-time 0 3))) (princ (char-to-string ?\n)) (princ (char-to-string ?\n)) (pp "Task summary (original-index):") (princ (char-to-string ?\n)) ;; (pp (seq-subseq ;; problems-sorted-by-original-index 0 2)) (princ (decorate-orgtable (seq-subseq problems-sorted-by-original-index 0 3))) (princ (char-to-string ?\n)) ;; Hardest 10 problems (princ (char-to-string ?\n)) (pp "Hardest 10 problems (raw):") (princ (char-to-string ?\n)) ;; (pp (seq-subseq ;; problems-sorted-by-original-index 0 
2)) (princ (decorate-orgtable (seq-subseq problems-sorted-by-hardness -10))) (princ (char-to-string ?\n)) ;; Hardest 10 problems (princ (char-to-string ?\n)) (pp "Hardest 10 problems (sessions):") (princ (char-to-string ?\n)) ;; (pp (seq-subseq ;; problems-sorted-by-original-index 0 2)) (princ (decorate-orgtable (seq-subseq problems-sorted-by-nsessions -10))) (princ (char-to-string ?\n)) (princ (char-to-string ?\n)) (pp "Logarithmic histogram:") ;; Make a logarithmic histogram (princ (char-to-string ?\n)) (pp logarithmic-histogram) (princ (char-to-string ?\n)) (pp "Linear histogram:") (princ (char-to-string ?\n)) ;; Make a linear histogram (pp linear-histogram) (princ (char-to-string ?\n)) (pp "Median difficulty:") (princ (char-to-string ?\n)) (pp (nth (floor (/ (seq-length problems-sorted-by-hardness) 2)) problems-sorted-by-hardness)) (pp "Median n-sessions:") (princ (char-to-string ?\n)) (pp (nth (floor (/ (seq-length problems-sorted-by-nsessions) 2)) problems-sorted-by-nsessions)) (princ (char-to-string ?\)))) ))
This document is for reading the foundational documents of the United States and making notes about them.
I have but one lamp by which my feet are guided; and that is the lamp of experience.
Has Great Britain any enemy in this quarter of the world, to call for all this accumulation of navies and armies? No, sir, she has none. They are meant for us: they can be meant for no other.
Shall we try argument? Sir, we have been trying that for the last ten years. Have we anything new to offer upon the subject? Nothing. We have held the subject up in every light of which it is capable; but it has been all in vain. Shall we resort to entreaty and humble supplication? What terms shall we find which have not been already exhausted? Let us not, I beseech you, sir, deceive ourselves longer.
They tell us, sir, that we are weak — unable to cope with so formidable an adversary. But when shall we be stronger? Will it be the next week or the next year? Will it be when we are totally disarmed, and when a British guard shall be stationed in every house?
Three millions of people, armed in the holy cause of liberty, and in such a country as that which we possess, are invincible by any force which our enemy can send against us.
Besides, sir, we shall not fight our battles alone. There is a just God who presides over the destinies of nations; and who will raise up friends to fight our battles for us.
The battle, sir, is not to the strong alone; it is to the vigilant, the active, the brave.
Besides, sir, we have no election. If we were base enough to desire it, it is now too late to retire from the contest. There is no retreat but in submission and slavery! Our chains are forged. Their clanking may be heard on the plains of Boston!
Our brethren are already in the field!
Is life so dear, or peace so sweet, as to be purchased at the price of chains and slavery? Forbid it, Almighty God!
Not for man. English is tricky.
"apt to" means what?
Is it proper for wise men?
"disposed to" means "destined to"?
word | translation |
---|---|
anguish | severe mental or physical suffering |
For my part | as to me |
to provide for it | to prepare for it |
snare | a trap |
comport | match |
remonstrate | to protest, to plead in objection |
supplicate | to beg humbly |
slight a petition | to treat a petition with contempt |
fond hope | a naive, foolish hope |
contend privilege | to fight for a right |
supinely | passively; lying on one's back |
extenuate | to lessen the seriousness of |
gale | a strong wind |
sweeps | rushes across |
resounding | ringing loudly |
When in the Course of human events, it becomes necessary for one people to dissolve the political bands which have connected them with another, and to assume among the powers of the earth, the separate and equal station to which the Laws of Nature and of Nature's God entitle them, a decent respect to the opinions of mankind requires that they should declare the causes which impel them to the separation.
We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness.
That to secure these rights, Governments are instituted among Men, deriving their just powers from the consent of the governed, –That whenever any Form of Government becomes destructive of these ends, it is the Right of the People to alter or to abolish it, and to institute new Government, laying its foundation on such principles and organizing its powers in such form, as to them shall seem most likely to effect their Safety and Happiness.
Prudence, indeed, will dictate that Governments long established should not be changed for light and transient causes;
all experience hath shewn, that mankind are more disposed to suffer, while evils are sufferable, than to right themselves by abolishing the forms to which they are accustomed.
it is their right, it is their duty, to throw off such Government, and to provide new Guards for their future security
To prove this, let Facts be submitted to a candid world.
He has refused his Assent to Laws
He has forbidden his Governors to pass Laws of immediate and pressing importance
He has refused to pass other Laws for the accommodation of large districts of people
He has called together legislative bodies at places unusual, uncomfortable
He has dissolved Representative Houses repeatedly
Legislative powers, incapable of Annihilation, have returned to the People at large for their exercise;
A very interesting phrase! As a Russian, I would have never thought that Legislative powers are incapable of annihilation. I would presume that they appear with the emergence of an assembly and disappear with its dissolution.
the State remaining in the mean time exposed to all the dangers of invasion from without, and convulsions within.
He has endeavoured to prevent the population of these States; for that purpose obstructing the Laws for Naturalization of Foreigners;
refusing to pass others to encourage their migrations hither, and raising the conditions of new Appropriations of Lands.
He has obstructed the Administration of Justice, by refusing his Assent to Laws for establishing Judiciary powers.
He has made Judges dependent on his Will alone, for the tenure of their offices, and the amount and payment of their salaries.
He has erected a multitude of New Offices, and sent hither swarms of Officers to harrass our people, and eat out their substance.
He has kept among us, in times of peace, Standing Armies without the Consent of our legislatures.
He has affected to render the Military independent of and superior to the Civil power.
He has combined with others to subject us to a jurisdiction foreign to our constitution, and unacknowledged by our laws; giving his Assent to their Acts of pretended Legislation:
For Quartering large bodies of armed troops among us:
For protecting them, by a mock Trial, from punishment for any Murders which they should commit on the Inhabitants of these States:
For cutting off our Trade with all parts of the world:
For imposing Taxes on us without our Consent:
For depriving us in many cases, of the benefits of Trial by Jury:
For transporting us beyond Seas to be tried for pretended offences
For taking away our Charters, abolishing our most valuable Laws, and altering fundamentally the Forms of our Governments:
He has abdicated Government here, by declaring us out of his Protection and waging War against us.
He has plundered our seas, ravaged our Coasts, burnt our towns, and destroyed the lives of our people.
He is at this time transporting large Armies of foreign Mercenaries to compleat the works of death
He has constrained our fellow Citizens taken Captive on the high Seas to bear Arms against their Country, to become the executioners of their friends and Brethren, or to fall themselves by their Hands.
He has excited domestic insurrections amongst us, and has endeavoured to bring on the inhabitants of our frontiers, the merciless Indian Savages, whose known rule of warfare, is an undistinguished destruction of all ages, sexes and conditions.
In every stage of these Oppressions We have Petitioned for Redress in the most humble terms
Nor have We been wanting in attentions to our Brittish brethren.
Free and Independent States, they have full Power to levy War, conclude Peace, contract Alliances, establish Commerce
To correct their position.
Very hard to conceive such a sentence. Especially "evince a design". What is "pursuing the same Object"?
word | translation |
---|---|
hath shewn | has shown |
acquiesce | |
Absolve | |
To all to whom these Presents shall come, we, the undersigned Delegates of the States affixed to our Names send greeting.
Each state retains its sovereignty, freedom and independence, and every Power, Jurisdiction and right, which is not by this confederation expressly delegated to the United States, in Congress assembled.
The said states hereby severally enter into a firm league of friendship with each other, for their common defence, the security of their Liberties, and their mutual and general welfare
The better to secure and perpetuate mutual friendship and intercourse among the people of the different states in this union, the free inhabitants of each of these states, paupers, vagabonds and fugitives from Justice excepted, shall be entitled to all privileges and immunities of free citizens
shall have free ingress and regress to and from any other state, and shall enjoy therein all the privileges of trade and commerce, subject to the same duties, impositions and restrictions as the inhabitants thereof respectively, provided that such restrictions shall not extend so far as to prevent the removal of property imported into any state, to any other State of which the Owner is an inhabitant;
If any Person guilty of, or charged with, treason, felony, or other high misdemeanor in any state, shall flee from Justice, and be found in any of the united states, he shall upon demand of the Governor or executive power of the state from which he fled, be delivered up, and removed to the state having jurisdiction of his offence
Full faith and credit shall be given in each of these states to the records, acts and judicial proceedings of the courts and magistrates of every other state.
delegates shall be annually appointed in such manner as the legislature of each state shall direct
power reserved to each state to recall its delegates
no person shall be capable of being delegate for more than three years, in any term of six years
In determining questions in the united states, in Congress assembled, each state shall have one vote.
Freedom of speech and debate in Congress shall not be impeached or questioned in any Court, or place out of Congress, and the members of congress shall be protected in their persons from arrests and imprisonments, during the time of their going to and from, and attendance on congress, except for treason, felony, or breach of the peace.
word | translation |
---|---|
Present | верительная грамота? |
viz. | |
The Stile | The name, title? |
accurate Symbolism, i.e. for Symbolism in which a sentence ‘means’ something quite definite.
Maurits Escher’s Work in Wood by M. Escher
This blog hasn’t had enough attention for quite a while. This is not, however, because I have abandoned it: the original purpose of this blog, that is, dumping essays about the books I read, is still valid. It’s just that the most recent book has taken an order of magnitude more time than I had expected it to take.
Okay, I’m going to write a bigger and better review of the “Structure and Interpretation of Computer Programs”, but the book altogether took so much time, effort and emotion that even writing the review is going to take a while.
Meanwhile, one of the by-products of my reading happened to be a proposal for a feature to be included into the Scheme language, one that is needed in order to support all the code examples in the book.
(Yes-yes, you’re not misreading it. The book published in 1996 is still not covered in full by the existing language standard. On the other hand, it means that there is a chance of achieving things.)
Now that the proposal has an official number, there is going to be a public discussion among the potential Language System providers, and maybe (if it passes the review) we will have this feature officially recognised.
Watching experts discuss something they may actually work on themselves is a fascinating experience, and a chance to improve one's own skills too. In this case, the discussion is not expected to be too heated, but still.
https://srfi.schemers.org/srfi-203/ — this is the link to my recent proposal.
It speaks about quite an interesting approach to generating computer images, one that builds on top of the classical features present in most drawing languages, such as PostScript, TikZ, MetaPost or SVG. While the expressive power is largely the same, the degree of abstraction is greater, which gives greater code reusability and flexibility.
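For a taste of the style, here is the classic flipped-pairs combinator from the book, sketched against the beside, below and flip-vert operations that SRFI-203 provides; painters are first-class values combined by higher-order procedures:

    ;; Tile the frame with four copies of a painter, two of them
    ;; flipped, using nothing but the language's combinators.
    (define (flipped-pairs painter)
      (let ((painter2 (beside painter (flip-vert painter))))
        (below painter2 painter2)))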
Scheme is by no means the only language that has a community feature review process.
Python has Python Enhancement Proposals (PEP).
Java has the Java Community Process (JCP).
Scheme has Scheme Requests For Implementation (SRFI).
The image in the header is a digital copy of a work by M. Escher, whose works inspired Peter Henderson, the original author of the “Picture Language”. http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.137.1503
This post is a not-so-technical introduction to the Scheme Request for Implementation 216: SICP Prerequisites, which I have written and made available for discussion. (Please contribute!)
SICP stands for the "Structure and Interpretation of Computer Programs", a world-famous introductory programming textbook.
Its aim is to make the exercises and examples from the book available on any Scheme system that bothers to provide it, and not just MIT/GNU Scheme and Racket (which doesn't even consider itself a Scheme any more). Before this SRFI, an issue-tracker request asking for SICP support would have looked vague. Now you can just write "Could you consider providing SRFI-216 (and 203)?" to your implementation.
In order to write this SRFI, I went through the whole book and solved all the exercises. However, my experience is just mine, and to make a truly good common vocabulary, community feedback is required.
For technical details and more background, I invite you to read the whole article.
SICP used to be an introductory programming course at the Massachusetts Institute of Technology.
Its aims are twofold.
On one hand, it aims to provide a sort of bridge between the level at which programmers reason about basic computational units, such as half-adders and XOR gates, and the level at which we can reason about everyday things, such as "please highlight all misspelled words".
On the other hand, it tries to expose the students to as many software design/architectural patterns as possible (ones that are actually thought out to add value, rather than to overcome the limitations of a language).
Both of these goals are enormous, and the course is itself enormous in size.
Still, passing it gives that (false) feeling of almightiness which is so seductive that, having tried it once, the student is almost bound to pursue it later on.
And apart from the sheer difficulty of the course, there are technical problems too.
To cut a long story short, the Scheme world is not a place for the weak.
SICP challenges the student extra hard in both the questions it asks, and in the questions it does not ask. In particular, it speaks almost nothing about any particular implementation of Scheme, apart from a few words about MIT/GNU Scheme. Recalling that MIT Scheme was initially conceived as a software simulator for a hardware chip will already give you a sketch of an emotion that a serious student tackling SICP is experiencing all the time.
Implementations matter a lot for a language as high-level as Scheme. In C you can get away with a lot of things, because the language itself has direct memory access, so you are allowed to bend the rules for your own benefit in a pretty much unlimited fashion.
However, Scheme is a high-level language, and the developers spent a lot of effort to make sure that it remains as abstractly formulated as possible, to avoid "designing themselves into a corner" from which it would be very hard to escape.
But a side effect of this is that a lot of features that modern programmers take for granted, such as random numbers or time and date queries, are unavailable to pure Scheme programmers.
SICP uses several of those, as its aim as a programming textbook is to introduce the student to as many programming concepts as possible, even if they have not yet been formulated as cleanly and abstractly as is desirable for inclusion into the Scheme standard.
One of the main benefits of the SICP is that it teaches the readers how to build an "artificial intelligence system" (in this case, a high-level programming system) on almost any Turing-complete substrate.
But it is all the more frustrating to realise that completing it entirely is only possible on two systems, one of which is highly peculiar and does not support Windows (MIT, officially), and the other of which doesn't even call itself a Scheme.
Moreover, nowadays the main strength of Scheme is not its strength as a general-purpose language (even though writing standalone programs is still completely possible, and Cisco still maintains its own implementation), but its strength as an extension language that can be built into almost any software product imaginable, written in any language.
There are Schemes working on top of JVM, CLR, Fortran, JavaScript. Scheme is an extension language for GNU Debugger, GNU GIMP and GNU Guix.
For an interested programmer, it is most reasonable to master SICP on the programming system that is most likely to fit well into their daily workflow.
This is one of the main goals of this SRFI.
Since many years have passed since the release of SICP, Second Edition, the language has evolved, and several widely accepted libraries have emerged that now make it possible to pass the course without digging into the gory details of interpreter implementation, as was required years ago if you wanted to use any Scheme other than MIT/GNU.
Some of the features still remain a no man's land, in the sense that no overwhelmingly accepted abstraction has emerged to be included into the Scheme standard. The most noticeable missing jigsaw piece is graphics output. Frankly speaking, in the age of extra-powerful HTML, it is unlikely that one will ever emerge.
The author of this text, therefore, had to deal with the graphics separately, as described in https://lockywolf.wordpress.com/2020/07/18/proposing-programming-language-features/ and the SRFI-203.
This work, however, deals with the features that either have been included into the standard or have widely accepted abstractions.
The result of my work is a Scheme Request For Implementation, a non-normative Scheme community standard.
It includes two constants, five functions, and one syntactic extension to the core language.
Many people would remember that it took quite a while until the C language got a dedicated logical type.
Well, for Scheme it took even longer, and therefore SICP has to refer to the two variables true and false, which are an abstraction over some (unknown) logical types.
Now that the Scheme standard has #t and #f, it makes a lot of sense to bridge this gap by equating SICP values and standard values.
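In code, the whole bridge is a minimal sketch like this (this is essentially all there is to say about booleans):

    ;; SICP's logical constants, equated with the standard boolean literals.
    (define true #t)
    (define false #f)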
For quite a while, Scheme did not have a standard function to find the current time.
Now that it does have one, it is possible to implement the most basic tool for performance profiling (a timer) portably.
This way, the runtime function was implemented.
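A possible portable sketch, built on the R7RS-small (scheme time) library; reporting the time in microseconds is my assumption, not a requirement of the book:

    (import (scheme base) (scheme time))

    ;; Time elapsed since some fixed, implementation-defined epoch,
    ;; converted from jiffies to (here) microseconds.
    (define (runtime)
      (floor (* 1000000 (/ (current-jiffy) (jiffies-per-second)))))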
How would you implement random numbers on a machine that has no access to a clock, an entropy source, or a decaying atom? It's actually a non-trivial question! That is why the Scheme standard still does not have a way to generate (pseudo-)random numbers.
Luckily, the need for random numbers is so great that a widely accepted abstraction appeared ages ago, and I only needed to repackage the code.
This way, the random procedure was born.
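The repackaging amounts to roughly the following sketch, assuming the host implementation ships SRFI-27, the widely supported "Sources of Random Bits" library:

    (import (scheme base) (srfi 27))

    ;; Return a pseudo-random non-negative integer below n,
    ;; which is what the book's exercises expect.
    (define (random n)
      (random-integer n))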
Multi-threading was a hot topic about ten years ago, when I was still at university. It has not yet been fully researched; deadlocks and race conditions still arise quite a lot, despite several languages that claim to be very fit for multi-threaded programming being available on the market.
The two most basic tools that are required for properly understanding multi-threading are something that "allows programs to run in parallel" and something that "allows one program to make sure that it is not interfering with some other one".
Of course, there are numerous other tools, but parallel-execute and test-and-set! (also known as atomic-compare-and-swap) are the basic foundation on which everything else can be built.
I implemented them on top of SRFI-18, which itself borrows inspiration from Java. It is funny to implement mutexes on top of other mutexes, but that is the price of the two models not corresponding completely.
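To illustrate the point, here is roughly how test-and-set! can be emulated on top of a SRFI-18 mutex; the single global lock is a simplification of mine, not the actual sample implementation:

    (import (scheme base) (srfi 18))

    (define tas-lock (make-mutex)) ; one lock guarding every cell, for simplicity

    ;; SICP's test-and-set!: if the cell (a one-element list) is already
    ;; set, return #t (acquisition failed); otherwise set it and return #f.
    ;; Holding the mutex makes the test and the update appear atomic.
    (define (test-and-set! cell)
      (mutex-lock! tas-lock)
      (let ((was-set (car cell)))
        (set-car! cell #t)
        (mutex-unlock! tas-lock)
        was-set))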
Streams are infinite lists. They would not have appeared here at all, if SICP had spoken at least a tiny bit about native mechanisms for syntactic extension.
Alas! SICP was written in the years when Scheme had so-called "non-hygienic" macros, but still says nothing about them.
The most useful cons-stream syntactic structure is, therefore, impossible to implement without writing your own Scheme, which by Chapter 3 is still not possible. I had, therefore, to implement it myself, on top of the latest (R7RS) standard's syntax-rules.
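The definition itself is tiny; a sketch on top of (scheme base) and (scheme lazy):

    (import (scheme base) (scheme lazy))

    ;; cons-stream must be syntax, not a procedure: the tail expression
    ;; is wrapped in (delay ...) instead of being evaluated eagerly.
    (define-syntax cons-stream
      (syntax-rules ()
        ((_ head tail) (cons head (delay tail)))))

    (define (stream-car s) (car s))
    (define (stream-cdr s) (force (cdr s)))

    ;; Example: the infinite stream of integers starting from n.
    (define (integers-from n)
      (cons-stream n (integers-from (+ n 1))))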
Firstly, no good standards appear unless a lot of people examine them. Code and document reviews are sorely needed.
Secondly, if you have a "favourite" Scheme implementation, you could try lobbying for this SRFI to be provided, in order both to make the implementation more attractive to its users and to promote SICP.
Tell your friends, students, professors, and fellow enthusiasts that studying SICP does not have to be a process full of pain.
If you are teaching programming, functional programming, or informatics (or your friends do), suggest that they have a look at the proposal; their feedback would be most useful.
The SRFI-216, and the SRFI-203, combined, provide all the features that are required from a Scheme implementation to host the book.
I hope that this work may extend the period of relevance of Scheme and SICP, and help popularise such an in-depth and unorthodox course.
The discussion is conducted through the mailing list. For details, please consult https://srfi.schemers.org/srfi-216/.
This article was published on Habrahabr.
TL;DR: I have written and published for public discussion Scheme Request for Implementation 216. It aims to make one of the most famous Computer Science curricula in the world, the Structure and Interpretation of Computer Programs, executable in full not only on MIT/GNU Scheme, but also on other interpreters and compilers, in particular on your favourite one. And if previously a bug-tracker request of "please add SICP support" would have sounded vague, then after this SRFI is accepted, SICP support should become much more commonplace.
To write this document, I worked through SICP in its entirety, identified the parts that have still not made it into the standard, and formulated them as two documents: SRFI-203 (accepted in September 2020) and the present one, SRFI-216, which I invite everyone to join.
For the technical details, welcome under the cut.
(Structure and Interpretation of Computer Programs) This is one of the most famous curricula in "general programming", formerly taught at the Massachusetts Institute of Technology (MIT) as an introductory course, and nowadays moved to the senior years because of its gigantic volume and depth, which, it is believed, a programmer no longer needs. The course takes the student from a one-line program that adds two numbers to writing their own implementation of Scheme, including a compiler and an interpreter.
The first edition was released in the 1980s; the second came out in 1996. A Russian translation exists. It was one of the first books to come with a companion web site. (Which still works.)
This curriculum seems to set itself two goals. One is to build a little bridge across the chasm of abstraction gaping between "computations on basic computational elements", such as an adder and a memory cell, and high-level abstractions of the kind "if a word is missing from the dictionary, underline it in red". The other is to acquaint the programmer with the most important software architectures developed by engineers over the years.
With the exception of two software systems (MIT/GNU Scheme and Racket, of which only one (MIT) is a Scheme system in the full sense of the word), SICP is impassable on most Schemes found in the wild.
Rather, for a person who has managed to get far enough into SICP, the word "impassable" is more of a challenge than an obstacle; however, the exercises requiring the missing functionality appear in the course earlier than the state of almightiness sets in.
One of the main virtues of SICP is that it explains how to build an "artificial intelligence system" (here meaning a high-level programming language) on practically any Turing-complete substrate. But it is all the more vexing to realise that it can be worked through in full only on two software systems, one of which does not support Windows (MIT, at least not officially), while the second outright declares that it is not a Scheme.
Besides, the main strength of Scheme these days is not that of a general-purpose language (although general-purpose programs come out fine too, and Cisco still maintains its own implementation), but the possibility of embedding it as an extension language into practically any software product written in any language. There are Schemes running on the JVM, the CLR, Fortran, and JavaScript. Scheme is the extension language of such projects as GNU Debugger, GNU GIMP and GNU Guix.
For an interested programmer, it makes more sense to master SICP on the Scheme that best embeds into the infrastructure they are accustomed to.
This SRFI is aimed at realising exactly that goal.
Since the author of these lines has, after all, acquired that (false) feeling of almightiness, he decided to put down a couple of concrete piers for the little bridge mentioned a few paragraphs above. Concretely, this took the form of a Scheme Request For Implementation document, number 216, which collects the list of requirements a Scheme interpreter must satisfy in order to run the full set of code examples from SICP.
Of course, the mere existence of the document guarantees nothing; the functionality has to be implemented in actual software systems. However, the document is accompanied by a "sample implementation" that works on at least one software system absent from the list above (Chibi-Scheme).
Functionality required for passing SICP, but not commonly available.
A function (random x) is proposed, which generates a random integer smaller than the given one.
Because Scheme is designed to run, among other places, on CPUs that have neither access to a clock nor a source of entropy, facilities for working with random numbers are not part of the R7RS-small standard. (But they will probably be part of -large.)
A function (runtime) is proposed, returning the value of the current time.
For reasons equivalent to those given above, date-handling functionality likewise stayed out of the base standard for a long time, and when it was finally accepted, the function names did not match those from 1996.
Because Scheme is a very old language, logical expressions were handled differently in different implementations.
For example, in some implementations the symbol #f exists, and in some it does not.
Also, in some systems, following the LISP tradition, the empty list is a "false" value as well.
For greater abstraction, therefore, SICP never uses a false expression by itself, but uses the variables/constants true and false, which are guaranteed to hold, respectively, a true value and a false value.
These two constants are also implemented in this document.
The question of multithreaded programming was extremely popular in the years when I was graduating from university (ten years ago), and although some progress is visible, it still cannot be said that any particular multithreading model has defeated all the others.
Nevertheless, any multithreading model has two basic components without which it cannot exist: parallel execution of tasks, and ordering of entry into critical sections.
SICP, accordingly, requires the existence of two primitives, parallel-execute and test-and-set!, which are meant to clarify exactly these two concepts.
The multithreading model of Scheme itself is similar to that of Java.
"Стримы" – это бесконечные структуры данных, схожие с генераторами/итераторами в языке Python (или использовавшимся ранее xrange
), только несравнимо более гибкие.
In principle, they are implementable in the current standard and should not have ended up in this document at all; however, the implementation of the basic primitive cons-stream cannot be a function, because it does not evaluate its second argument.
About the built-in mechanisms of syntactic extension, SICP says nothing at all.
Accordingly, this construct is also implemented in this proposal.
Working with graphics is not touched upon in this document. Possible primitives are published in SRFI-203.
Firstly, quality standards cannot exist without being examined by a large number of people. Reviews of the proposal and of the code are badly needed.
If you already have a favourite Scheme implementation, ask the people responsible for it whether this standard could be implemented in your favourite system.
Tell your friends, students and enthusiasts that studying SICP does not have to be a process full of pain.
If you teach programming or functional programming, or have acquaintances who teach, suggest that they take a look at the proposed extension; their feedback would be valuable in the highest degree.
And, well, I must say that I simply consider Scheme an excellent language. Use Scheme: you can do so on a huge number of platforms, including Plan9, Android and WebAssembly, and embed it into other programs.
mount output.

# initramfs
rootfs on / type rootfs (ro,seclabel)
# devfs
tmpfs on /dev type tmpfs (rw,seclabel,nosuid,relatime,size=3933144k,nr_inodes=983286,mode=755)
# pts
devpts on /dev/pts type devpts (rw,seclabel,relatime,mode=600)
# cgroups
none on /dev/stune type cgroup (rw,nosuid,nodev,noexec,relatime,schedtune)
none on /dev/cpuctl type cgroup (rw,nosuid,nodev,noexec,relatime,cpu)
none on /dev/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset,noprefix,release_agent=/sbin/cpuset_release_agent)
# cgroup2
cg2_bpf on /dev/cg2_bpf type cgroup2 (rw,nosuid,nodev,noexec,relatime)
# some unknown USB shit
adb on /dev/usb-ffs/adb type functionfs (rw,relatime)
# proc
proc on /proc type proc (rw,relatime,gid=3009,hidepid=2)
# sys
sysfs on /sys type sysfs (rw,seclabel,relatime)
# wtf?
selinuxfs on /sys/fs/selinux type selinuxfs (rw,relatime)
# https://en.wikipedia.org/wiki/Debugfs
debugfs on /sys/kernel/debug type debugfs (rw,seclabel,relatime)
# https://www.phoronix.com/scan.php?page=news_item&px=TraceFS-Linux-Tracing-FS
tracefs on /sys/kernel/debug/tracing type tracefs (rw,seclabel,relatime)
# https://lwn.net/Articles/421297/
# https://www.fsl.cs.stonybrook.edu/docs/tracefs-fast04/index.html
pstore on /sys/fs/pstore type pstore (rw,seclabel,nosuid,nodev,noexec,relatime)
# cgroups
none on /sys/fs/cgroup type tmpfs (rw,seclabel,relatime,size=3933144k,nr_inodes=983286,mode=750,gid=1000)
# cgroups
none on /sys/fs/cgroup/freezer type cgroup (rw,relatime,freezer)
# ?
tmpfs on /mnt type tmpfs (rw,seclabel,nosuid,nodev,noexec,relatime,size=3933144k,nr_inodes=983286,mode=755,gid=1000)
# Vendor partition
/dev/block/bootdevice/by-name/persist on /mnt/vendor/persist type ext4 (rw,seclabel,nosuid,nodev,noatime,data=ordered)
/data/media on /mnt/runtime/default/emulated type sdcardfs (rw,nosuid,nodev,noexec,noatime,fsuid=1023,fsgid=1023,gid=1015,multiuser,mask=6,derive_gid,default_normal,reserved=100MB)
/data/media on /mnt/runtime/read/emulated type sdcardfs (rw,nosuid,nodev,noexec,noatime,fsuid=1023,fsgid=1023,gid=9997,multiuser,mask=23,derive_gid,default_normal,reserved=100MB)
/data/media on /mnt/runtime/write/emulated type sdcardfs (rw,nosuid,nodev,noexec,noatime,fsuid=1023,fsgid=1023,gid=9997,multiuser,mask=7,derive_gid,default_normal,reserved=100MB)
/dev/block/platform/soc/1da4000.ufshc/by-name/system on /system type ext4 (ro,seclabel,relatime,block_validity,discard,delalloc,barrier,user_xattr)
/dev/block/platform/soc/1da4000.ufshc/by-name/vendor on /vendor type ext4 (ro,seclabel,relatime,block_validity,discard,delalloc,barrier,user_xattr)
/dev/block/bootdevice/by-name/modem on /vendor/firmware_mnt type vfat (ro,context=u:object_r:firmware_file:s0,relatime,gid=1000,fmask=0337,dmask=0227,codepage=437,iocharset=iso8859-1,shortname=lower,errors=remount-ro)
/dev/block/bootdevice/by-name/bluetooth on /vendor/bt_firmware type vfat (ro,context=u:object_r:bt_firmware_file:s0,relatime,uid=1002,gid=3002,fmask=0337,dmask=0227,codepage=437,iocharset=iso8859-1,shortname=lower,errors=remount-ro)
/dev/block/bootdevice/by-name/dsp on /vendor/dsp type ext4 (ro,seclabel,nosuid,nodev,relatime,data=ordered)
none on /acct type cgroup (rw,nosuid,nodev,noexec,relatime,cpuacct)
none on /config type configfs (rw,nosuid,nodev,noexec,relatime)
/dev/block/bootdevice/by-name/userdata on /data type ext4 (rw,seclabel,nosuid,nodev,noatime,discard,noauto_da_alloc,data=ordered)
/dev/block/bootdevice/by-name/cache on /cache type ext4 (rw,seclabel,nosuid,nodev,noatime,data=ordered)
tmpfs on /sbin type tmpfs (rw,seclabel,relatime,size=3933144k,nr_inodes=983286,mode=755)
/sbin/.magisk/block/persist on /sbin/.magisk/mirror/persist type ext4 (rw,seclabel,relatime,data=ordered)
rootfs on /sbin/charger type rootfs (ro,seclabel)
rootfs on /sbin/charger_log type rootfs (ro,seclabel)
/sbin/.magisk/block/system on /sbin/.magisk/mirror/system type ext4 (ro,seclabel,relatime,block_validity,discard,delalloc,barrier,user_xattr)
/sbin/.magisk/block/vendor on /sbin/.magisk/mirror/vendor type ext4 (ro,seclabel,relatime,block_validity,discard,delalloc,barrier,user_xattr)
/sbin/.magisk/block/data on /sbin/.magisk/mirror/data type ext4 (rw,seclabel,relatime,discard,noauto_da_alloc,data=ordered)
/sbin/.magisk/block/data on /sbin/.magisk/modules type ext4 (rw,seclabel,relatime,discard,noauto_da_alloc,data=ordered)
tmpfs on /storage type tmpfs (rw,seclabel,nosuid,nodev,noexec,relatime,size=3933144k,nr_inodes=983286,mode=755,gid=1000)
/data/media on /storage/emulated type sdcardfs (rw,nosuid,nodev,noexec,noatime,fsuid=1023,fsgid=1023,gid=9997,multiuser,mask=7,derive_gid,default_normal,reserved=100MB)
tmpfs on /storage/self type tmpfs (rw,seclabel,nosuid,nodev,noexec,relatime,size=3933144k,nr_inodes=983286,mode=755,gid=1000)
I have read the “Philosopher’s Madness” by LiShan Chan. It is a book about mental illness, British Education, academic careers, their successes and failures, Overseas Chinese, Singapore and Dubai, writing and reading.
This review is not part of the technological book review series.
If you are still interested, welcome under the cut.
For an adult it is a little hard to find time for reading. This is especially true for work-obsessed Asian peoples, such as the Russians and the Singaporeans. Maybe that’s why the books that I found for sale in Singaporean bookstores are predominantly on the short side?
I have been wanting to try out some Eastern Asian literature for quite a while, and the slight problem I have had with Shanghainese literature is that it is predominantly in Chinese. I believe I will get to it eventually, but too impatient to wait long enough, I decided to try out Singaporean literature first. Even though Singapore is, perhaps, more famous for its poetic culture, rather than for prosaic, I ended up buying two non-fiction “Experience Reports”, one of which happened to be Ms. Chan’s.
Buying the book turned out to be surprisingly easy: the bookshop mailed it to me the same day, and I received it two days later. I cannot praise myself enough, as I started reading almost the same week. A big achievement for a person who has been spending almost all of his life at a computer screen for the past ten years. (I still use a lot of paper literature, though, for technical reading. I haven’t read much paper-based leisure material for quite a while.)
The book is short; an avid reader can probably devour it in a few hours. I managed to spread it over a few evenings, partially trying to use it as an eye relaxant to make it easier to fall asleep. (A lifehack of surprising utility in the age of LED screens.)
The book tries to be a faithful representation of the author’s experience of a schizophrenic psychotic episode, starting from describing the sequence of events that immediately preceded it, and completing the book with a few reflections on the matter of being mentally ill itself. The book features a list of references, which is not a common thing for a personal report, but I guess this is due to Ms. Chan’s previous academic experience.
In short, I can’t say that this is a great book that everyone should read. The language style is nothing out of the ordinary. It is nicely readable English that does feel a little bit Chinese, which is, I guess, expected from an author who grew up in a quadrilingual society. The narrative is smooth; it doesn’t wander in loops and follows a rather sequential and straightforward plot.
The story of a mental illness is probably worth showing to someone suffering from (or suspecting he has) a mental illness, as it is very down to earth in these particular parts. The narrative literally speaks of “what I felt” and “what I did”. To me at least, it felt very natural and uncontrived. What the author is saying is that there is a misconception in society today. People seem to already understand that a mental illness is not a death sentence. People also seem to know that mentally ill people are harmless, and certainly know that their illness is not caused by some mystical phenomenon.
However, people still seem not to know what to do in situations when someone they know appears to suffer from an illness that looks possible to control, but it is unclear how to approach them. Many people respond well to treatments and eventually get better. Moreover, in most cases mentally ill people are harmless, and if they do cause harm, most often they harm themselves. None of this is new, but yet another presentation of the same ideas, done through the eyes of someone who felt it herself and supported by cases from personal experience, may be more persuasive.
What really won me over, though, was not the part about the mental illness, but rather the sideline about an aspiring young scholar following the philosophical path. To be honest, I didn’t see any philosophical originality in what Ms. Chan was writing. Moreover, I could clearly see this not-so-uncommon vision of a person following a clearly outlined path. In Ms. Chan’s case this was a path that partly coincided with mine: going to Eastern Asia after completing a British Higher Education degree. This whole “Academic Path” makes you feel that life in general consists of jumping from institution to institution, prepared for you by some other people whose expertise is in making life meaningful for other people. This problem, I believe, is by no means specific to Britain or the Sinosphere, but the story of an aspiring philosopher makes it seem more pronounced.
That is, to me at least, philosophers are the most prominent of those people who are least expected to follow a pre-written path. Well, some of the recent philosophers did have a degree in philosophy, for example Jacques Derrida; however, I have always found the greatest value in the works of philosophers by doom rather than philosophers by profession. Within the lines of the book I do see quite a lot of this contradiction between the explicit and the implicit, the connotation and the denotation, the desire to be good and the desire to be successful.
I have read “The Light That Failed”. This review is not part of the series of reviews of technological books. Nevertheless, to keep the brain moving, some humanities reading is advised.
This book was part of a book club discussion in Shanghai. Briefly, it discusses a new (2019) tendency in politics, in which Western politicians are starting to use rhetoric they have not used so far; rhetoric not unlike that used by authoritarian politicians. Surprisingly, the book is known among readers in China; it has a rating on DouBan. The book is not related (at least, directly) to the eponymous novel by Rudyard Kipling, nor to the eponymous film of 1939 based on that work.
This book looks like a scientific book. It tries to be serious. The book's structure is well defined, the chapters have a clearly defined scope, and the narrative looks as if it is trying to make a self-consistent argument in favour of a certain thesis, so as to establish that thesis's truth. As is certainly typical of scientific books, it is accompanied by an extensive bibliography.
On the other hand, even though it presents a very carefully crafted image of a scientific book, certain tiny wrinkles show on the seemingly impeccable surface of that image. Why are the terms the authors use not clearly defined right at the beginning? Why does the reference list include so many links to newspapers and other publicist material?
Well, maybe it is not actually a scientific book?
My friend told me that this is written in just the way the U.S. discusses society. Hmm… too bad for science, too good for the U.S.A., I guess. Perhaps I would appreciate it if Russian books about societal matters were written with this much care.
The book largely follows the standard left-wing (but not ultra-left-wing) narrative of the generalised West. This alone would have been enough for me not to consider the book at all. However, the attention to detail, combined with a uniquely broad coverage of sources, and especially a one-of-a-kind attention to the Central European view of what is going on, still attracted my attention.
The book speaks about the essential tiredness of peoples all over the world, from Russia, through Central Europe, especially Hungary, to the USA with its already not-so-freshly inaugurated president Donald Trump: tiredness of the traditional way of speaking about the world’s political path.
A lot is said about the age of imitation. Contrary to the traditional perception, imitation by itself is not described as something intrinsically bad. It is noted, however, that imitation is not traditionally seen as something worth being proud of either.
This very much reminds me of the typical Chinese attitude that I hear way too often, and even more from people making “Chinese copies” of stuff: “We are first and foremost learning, and copies are a nice by-product that makes us rich.” A similar attitude is explored in Central Europe, where people were also forced by the laws of history to imitate Western models, too often superficially. Similarly, Russia is shown to behave in an imitative way, only imitating (badly) certain less appreciated elements of Western behaviour.
Imitation is one of the central themes of the book, even though it didn’t earn a place in the title. The other pervasive concept (even less well defined than imitation) is “liberalism”. Liberalism is not actually defined anywhere in the book, and what the authors imply by “liberalism” seems quite different from what I would call liberalism. This disparity is exacerbated by the fact that the authors are forced (by the object of study) to consider the Central European definition of “liberalism”, that is, “anti-communism”, and somehow reconcile it with the Western European one, which has recently been appropriated by the “left”.
I won’t expose too much of this inevitable conflict in such a short review, and the curious reader is invited to visit the book itself.
What can I make out of the book as a reader?
The authors claim that the world got tired of “liberalism” and now wants freedom from it. I disagree. The rhetoric that various politicians all over the world are increasingly using may be discomfiting to the “establishment liberals”, but the reason it is used lies not within the literal meaning of the words being pronounced; rather, it owes to the fact that the “establishment liberals”, who are so well versed in the liberal jargon, in fact represent a more authoritarian standpoint than the supposed “illiberals”. Liberalism itself, as a concept of “live and let live”, will probably never get old, just as it has probably never been totally alien to people. It is always worth looking at the essence of things rather than their superficial cover.
This “live and let live”, however, raises an interesting question of “who” should live, and let live “who”? The idea of “illiberal democracy” (I am quoting the book for a lack of a better term) can be roughly summarised as “the easiest way to protest is to go away”. Why would I be fighting a dictator, when I can just move to another country? In China this may even turn out to be like “why would I be criticising my province head, when I can move to another province?”.
And taking such an idea to the extreme, you would end up in a world that is incredibly intolerant of differing views. Peculiarly, this world would also be very visibly diverse, as people would mix a lot and have many unlike superficial features. However, inside it you would find less diversity, not more, because “really different” people would move away. It would thus be a world that is both free and intolerant at the same time: superficially diverse, and homogeneous at the core.
Imitation is the essence of life. The first version of this review had “human life” in the first sentence, but in fact imitation lies at the core of all life whatsoever. Life is the way that protein bodies replicate. Information transmission cannot exist without a medium, and a medium is a thing that copies information from one storage to another. Each time something is heard, seen, felt, smelled or tasted, a copy of it is made. Moreover, there is nothing that can reliably differentiate between a real object and a synthetic signal to a “brain in a jar”, and we are all essentially “brains in jars” of our own skulls.
It is needless to repeat that repetition is the mother of learning. (A Russian proverb.) Repetition is imitation; imitation is copying. The West seems astonishingly unwilling to accept the fact that information is not a physical object. Immeasurable effort has been spent on trying to turn information into canned digital artefacts that behave like physical objects, and all of it is wonderfully faulty. Trying to restrict imitation of anything practical, while urging the people around you to imitate you superficially, is a perfect recipe for a disaster. And part of an image of this disaster is carefully carved on the pages of the book.
The book does not give a definitive answer on what sense to make out of the new prevalence of imitation in the modern world. Still, as a survey of imitative political endeavours the book is quite comprehensive.
The book even speaks a bit about China, but judging from the fact that I cannot remember anything from that chapter, the discussion wasn’t very compelling.
If I have to summarise my impression of the book's contents in a few words: the authors are wrong that the Light of Liberalism has failed, but the Liberalism of the new century will not be very similar to the liberalism that was a child of the Century of Two World Wars.
It was entertaining reading. I learned a lot of new English words and reinforced my habit of reading books with a pen. Will this book be remembered? Unlikely; I think it will be forgotten within a couple of years. Is it fun? I’d say it beats Twitter. Is it telling the truth? No, but if you are interested in this kind of literature, you will be filtering out the meaningless filler automatically.
# | English | Russian |
---|---|---|
1 | reappraisal | переоценка |
2 | stride | шаг, интервал |
3 | repudiated | отречься |
4 | heady | опьяняющий |
5 | a raft of | пачка чего-то |
6 | a prong | зубец |
7 | epitome | олицетворение |
8 | cachet | капсула |
9 | onus | бремя |
10 | cogency | неоспоримость |
11 | subliminal | подсознательный |
12 | flagrantly | ужасно, возмутительно |
13 | contestation | оспаривание |
14 | contention | раздор |
15 | afflict | причинять боль, проблемы |
16 | prelapsarian | до Падения Человека |
17 | harrowing | душераздирание (букв. боронование) |
18 | precarious | ненадёжный |
19 | squabble | перебранка |
20 | be riven | быть расщеплённым |
21 | propinquity | сродство |
22 | quell | подавлять |
23 | contention | раздор |
24 | eschew | избегать, сторониться |
25 | rashness | опрометчивость |
26 | assail | атаковать |
27 | evince | выявить наличие |
28 | posit | постулировать |
29 | verbal garb | словесное обрамление |
30 | relinquish | уступить |
31 | bide your time | выжидать |
32 | kilter | исправность |
33 | sniffily | (перен.) высокомерно |
34 | wherewithal | необходимые средства |
35 | fount | источник, ключ (водяной) |
36 | acme | кульминация |
37 | brunt | главный удар |
38 | inextricable | безвыходный от запутанности |
39 | tussle | схватка (также мед. схватки) |
40 | hurl | швырять |
41 | riven by | раздираемый |
42 | caesura | цезура |
43 | quandary | затруднительное положение |
44 | larceny | воровство |
45 | pilfering | мелкое воровство |
46 | tutelage | опекунство |
47 | to be felled | быть срубленным |
48 | malfeasance | злодеяние |
49 | awry | косо |
50 | toe the line | подчиняться требованиям |
51 | to curb | взнуздать (метаф.) |
52 | consternation | оцепенение от испуга |
53 | buck the trend | сломать тренд |
54 | afflict | подействовать негативно |
55 | nag away | ворчать |
56 | venally | имеющи склонность к мздоимству |
57 | mainstay | оплот |
58 | abnegation | отречение |
59 | vacillate | колебаться |
60 | impudence | наглость |
61 | bequeathed | завещать |
62 | dunce (hat) | “шляпа дурака”, наказание в английских школах |
63 | searing | жгучий |
64 | harrowing | боронование (мет. душераздирающе) |
65 | intricate | запутанный |
66 | scuttled | затопленный путём открытия люка (scuttle) |
67 | gnawing | грызущий |
68 | dishevelled | взъерошенный |
69 | emaciated | истощённый |
70 | scoffing | саркастический |
71 | incensed | восхваляемый (ему возжигают фимиам) |
72 | indictment | обвинительный акт |
73 | trenchant | язвительный |
74 | insolence | наглость |
75 | to vent | испускать |
76 | venality | продажность |
77 | prurient | похотливый |
78 | lurid | пылающий, сенсационный |
79 | fungibility | взаимозаменяемость |
80 | swirl | кружение |
81 | sheen | блеск |
82 | strenuously | напрягшись |
83 | consternation | испуг |
84 | coruscating | сверкающий |
85 | ferreting | охота с хорьком |
86 | benighted | застигнутый ночью |
87 | predicament | затруднительное положение |
88 | pitting | заставить соревноваться |
89 | volte-face | поворот кругом (фиг.) |
90 | felled | срубленный |
91 | languish | изнывать |
92 | hinterland | глубокий тыл |
93 | gnomic | гномический |
94 | vacillation | колебание (втч мнений) |
95 | stint | ограниченный набор (работы, финансирования) |
96 | complacent | самодовольный |
97 | variegated | разносторонний |
98 | snuff out | затушить |
99 | revile | поносить, ругать |
100 | falter | дрогнуть, запинаться |
101 | portends | предвещать |
102 | gleaned | подчёрпнутый |
103 | circumscribed | чётко ограниченный |
104 | tepid | тёпленький |
105 | precinct | округ, участок |
106 | ruminate | раздумывать |
107 | moribund | умирающий, заброшенный |
108 | curt | грубо краткий |
109 | purblind | недальновидный |
110 | ferret | разнюхивать |
111 | wryly | косо (о взгляде) |
112 | risible | смешливый |
Is that true nowadays? What about Navalny’s?
The Divine Right of Kings?
Vietnamese? What about Korea, Japan? Triads? Yakuza?
Shall the party be obliged to deliver its initially declared objectives?
What about all of us? Don’t we all feel way more confident about things we do not want?
What about the old constitution of ~1950?
Is it “Chinese-ish” English?
China has a very fertile land. People seem to be growing stuff everywhere easily. Is this a fundamental difference?
What about everyone else? Aren’t we all getting into the era of imitated nations? Just because everyone is ready to pay for what they “want to see”?
Isn’t this exactly how things worked in the USSR? Many people admit that the CPSU effectively worked as a religious organisation, when the Marxist dogmas happened to under-perform.
Furthermore, doesn’t any totalitarian entity work like this? The Ahnenerbe, the FaLunGong, etc…?
Didn’t the “Deep America” elect Trump after all? Doesn’t every country have a “Deep Country”? Is there a “Deep Russia”?
People in the USSR also didn’t believe in anything they were speaking of. And were equally content.
What about Hofstede and cultural codes?
This has been used in the USA in the form of Confederate Generals’ statues. Why don’t “we”, as Russians or whoever, reuse this general mantra? A person has agency only as long as he is alive. Afterwards it is natural and perfectly morally acceptable to claim whatever legacy he has left as the legacy of the nation.
So the “extensive apparatus” became more expensive in China too!
What about Vietnam?
We discussed the book at the book club with Frank Tsai, and the discussion was not really productive. The people kept referring to some old things in Chinese history that, I feel, are not relevant to daily life.
Kerry Brown is a well-known scholar of Chinese culture. The Communist Party of China could not have avoided his attention.
I have read this book at a book club, and would like to share some of my impressions.
Dr. Brown is a very well known and highly regarded scholar of China. He is considered an expert both in China and overseas, up to the degree at which the Chinese government itself sometimes consults him about how China is seen through foreigners’ eyes.
What is this book about?
This book is about the official attitude of the Communist Party to several of the major issues that every social institution has to consider at some point:
The views on these topics from within the Communist Party and from the outside are usually radically different. Being a foreigner, Dr. Brown usually writes about China from an outsider's point of view. This is to be expected, as his audience is predominantly external. Moreover, he seems to be a strong supporter of the “outsider” point of view, the one that he calls “rigorously justified” and “empirically supported”.
This leads, however, to a conflict of visions with a large group: the very object of his study. In simple words, it becomes hard to discuss China with its own ruling party's members, who seem to speak a language different from the one the external scholars of China speak.
One could simply say that the texts and ideas that the Communist Party's speakers produce are just symptoms of being victims of state propaganda, but dismissing such a large body of information would mean a large loss of data. Rather than dismissing all the material that the Chinese Government and the Communist Party produce as propagandist, Dr Brown proposes to develop a certain skill of understanding what exactly is written “between the lines”.
This book is, in some sense, an attempt to write a field guide to understanding what the Chinese propaganda means when it says something.
Can I say that the attempt is successful?
I did write down several book titles and several other materials as a result of reading this book.
I did learn that after the year 1978, not just the economic policy of China changed, but also the government’s attitude to almost every aspect of a society from the list above.
I did find out that the Chinese Communist Party considers the broadly understood “Culture” an important tool in shaping the society and attempts to use it in its policymaking with more and more effort each year.
I did understand that the presidency of Chairman Xi is facing a challenge just as huge as, or even greater than, the challenges encountered by its predecessors. On this issue there seems to be a consensus of both the external and internal scholars.
But have I understood what the Communist Party of China actually is? Not really. Apart from the fact that it considers itself an entity different from just “a party” in the Western sense of the word, I did not really understand much. One analogy made in the book, intended to give a better understanding of the Party, is that of the Catholic Church, which is also a grand structure that considers itself responsible for all aspects of the social life of its members. However, this analogy is not really illuminating. (Pun intended.)
Almost every aspect of the “uniqueness” of the Communist Party of China seems to have been tried by some other guys in this world. Even the “totalitarian” property is being just as successfully replicated by the Vietnamese right across the border to the south. Moreover, almost all properties of the CPC were already tried and tested by the Communist Party of the Soviet Union, and we all know how ingloriously that party ended. The Chinese Party has obviously learned from the Soviet lessons, but whether this makes this social structure unique, I am not so sure.
On the positive side, I now understand that the Chinese Government will be paying more attention to the cultural side of life. It is going to be a hard job, as the “classical” art was all redistributed back in the nineteenth century and is unlikely to be bought out from the countries that already possess it. However, Chinese modern art is growing in price and is likely to become more and more interesting as it is given more attention, so “hard” does not mean “impossible”.
Native Chinese readers seem to say that the book largely repeats what they are taught at school in the politics course, so the book does not seem hugely critical. The author does cite a few dissidents from within China, but frankly speaking, their views did not do much to enlighten me either.
I cannot even make a comprehensive list of things that I would be willing to consult in order to understand China better. Below is a poor attempt.
word | translation |
---|---|
disabused | |
preordained | |
prodded | |
pernicious | |
beset | |
beget | |
entail | |
People like similarity between cause and effect both in nature and in size. People feel that correlation equals causality.
 | 0 | 1 |
---|---|---|
0 | x | y |
1 | z | k |
Intelligence officers’ work consists of writing reports. This is one of the reasons why literature is considered such an important subject in school. How many reports do these people create each year? How much is declassified? How much is destroyed? How can this be compared to the FSB, GRU, and SVR? Who gives them assignments?
It is impressive how much the way a human society works resembles how a living organism works, consisting of different organs.
It is hard to make yourself think about problem solving directly.
We all love news. At the same time, we are all dissatisfied with the current state of the news ecosystem. At the moment, we all cope with this dissatisfaction in different ad-hoc ways. However, we all want a more efficient, controllable, and affordable (both in money and in effort) solution for getting news.
In that meeting we wanted to discuss what exactly we do and what we want to improve, both technologically and socially, with the potential of developing a novel news-related product.
“News” is not a very clearly defined thing by itself, and this document also aims to give a more or less working definition, as well as to clarify the differences between the types of news.
Examples: a tsunami, a piece of software released, a president elected, I made a photo of my cat.
Examples: NVidia publishing a press-release, Government office issuing a press-release, A website issuing an update, I uploaded the cat’s photo online.
Example: Bloomberg, Reuters, a blog, a Facebook profile.
In about 90% of cases they act as news providers, but in about 10% of cases they act as news sources, when they have “special correspondents”.
Examples: The BBC, The First Channel (of whatever country), The Washington Post, The Echo of Moscow Radio.
Examples: Cnet.com, TechRadar.com, Habr.{com,ru}, opensource.com, medium.com.
Examples: Facebook, Twitter, LiveJournal, Mastodon, Parler, VK.
Examples: Google News, Yandex News, Yandex Zen, news.ycombinator.com, Perl Planet
Examples: Email client, Facebook website, Facebook App, AtomFeed Reader, WeChat messenger, Telegram
A thing that is hard to define precisely, since it is context- and consumer-dependent. The length of a text/video/audio can be an estimate, but a bad one.
As extreme examples, scientific papers can easily demand 8 hours per page, whereas pulp fiction can be consumed at a much higher rate. Vladimir’s personal record is the SICP book, which took 9 months to read.
Google News is a service that gives you news headlines, as well as full article bodies fetched from news providers. Google does quite a good job at identifying duplicates, since one news source is usually later used by many news providers aiming at delivering the content, enriched and post-processed, to their users.
Google News gives you access to the full article body, fetched from one of the providers, but usually not the news source.
Google News uses a sophisticated recommendation method, presumably fully algorithmic (no human involvement), to recommend news to consumers. This algorithm is heavily based on the data Google knows about its users, collected implicitly, the most used data source being the user's search queries.
The problem with Google News is that it is hard (impossible) to “force” Google to show you more of something. It is just not possible to subscribe to something directly.
Vladimir gets his news by subscribing to individual RSS/Atom feeds via an email-gateway. He gets ~50 news-related emails daily and is quite overwhelmed with them.
This solution has the difficulty that not all news providers (or news sources) have email or feed gateways. For example, Facebook disabled its gateways circa 2011.
The filtering problem could be solved by crafting various data-collection "sensors", such as a Chrome extension or a context-sensitive keylogger, and then training a local filtering tool, but Vladimir has so far been extremely far from doing that.
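A minimal sketch of what such a local filtering tool could look like, assuming the "sensors" already produce labelled headlines (the example headlines and labels below are invented): a naive Bayes classifier over headline words, with no external dependencies.

import math
import re
from collections import Counter

def tokenize(headline):
    return re.findall(r"[a-z']+", headline.lower())

class HeadlineFilter:
    def __init__(self):
        self.words = {"keep": Counter(), "drop": Counter()}
        self.labels = Counter()

    def train(self, headline, label):
        # label is "keep" or "drop", as produced by the hypothetical sensors.
        self.labels[label] += 1
        self.words[label].update(tokenize(headline))

    def log_score(self, headline, label):
        # Log prior plus Laplace-smoothed log likelihood of each word.
        logp = math.log(self.labels[label] / sum(self.labels.values()))
        n = sum(self.words[label].values())
        vocab = len(set(self.words["keep"]) | set(self.words["drop"]))
        for w in tokenize(headline):
            logp += math.log((self.words[label][w] + 1) / (n + vocab))
        return logp

    def keep(self, headline):
        return self.log_score(headline, "keep") >= self.log_score(headline, "drop")

f = HeadlineFilter()
f.train("New Scheme SRFI published", "keep")
f.train("Celebrity spotted on a beach", "drop")
print(f.keep("SRFI discussion opened"))  # True with this toy training set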
That’s what most people do. They just regularly visit, say, Habr.com, and try to tune the news feed in a way that is as personalised as possible.
Most of the ways above are annoying.
What follows will present various thoughts about the news data structures, algorithms and pipelines.
News creation can be:
Often the original piece of news is very terse. There is a process, called "augmentation" in this document, that makes the piece of news more understandable, more readable, and richer.
Example: Vladimir wrote SRFI-203, which is a technical document. Later, Vladimir wrote an article on Habr.ru, in order to announce the existence of SRFI-203, and in order to provide more context on why it is needed, and to give some examples of its usage.
Augmentation is usually done by the news providers, and is often tailored for their audience. This is one of the places where bias is introduced. On the other hand, leaving augmentation out entirely seems not viable, as readers often lack the context.
Naturally, news providers are many, and it is hardly possible to subscribe to each of them individually, especially since many of them do not have any web pages at all, let alone feeds, RSS or otherwise.
The news, therefore, have to be aggregated.
Aggregation can be:
Naturally, filtering is crucial for any news-related ecosystem, since the amount of noise is giant.
Formatting is more important than it is usually seen. Some people are happy with just headlines. Some people prefer abstracts. Other people are into full-length articles, extended articles (long-reads), or even series of articles (a thing that is hard to define!).
A (hypothetical) perfect piece of news supports all the aforementioned levels of abstraction.
The problem here is usually that news is manufactured at a single level of abstraction. We are therefore met with the problem of up-scaling and down-scaling information.
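A hypothetical data structure for such a multi-level piece of news might look as follows; the field names and the time thresholds are invented for illustration, not taken from any existing system.

from dataclasses import dataclass
from typing import Optional

@dataclass
class NewsItem:
    title: str                       # highest level of abstraction
    abstract: Optional[str] = None   # a few sentences
    article: Optional[str] = None    # full-length body
    long_read: Optional[str] = None  # extended version, if it exists
    source_url: Optional[str] = None

    def best_for(self, seconds_available):
        # Crude selection by reading budget; falls back to shorter forms
        # when a level has not been manufactured.
        if seconds_available < 30 or not self.abstract:
            return self.title
        if seconds_available < 300 or not self.article:
            return self.abstract
        return self.article

item = NewsItem(title="SRFI-203 finalised",
                abstract="The SRFI describing a SICP-compatible drawing substrate is final.")
print(item.best_for(60))  # falls back to the abstract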
Since this section naturally deals with the problem of news “weight”, apart from up-scaling and down-scaling, we should mention same-scaling, or re-wording a piece of information.
Note that re-wording is tightly connected to lossless compression: lossless compression reduces the length of a piece of news while preserving its weight. However, its practical value is not self-evident.
In the same section, I have to discuss medium conversion.
Neither summarising, nor elaborating are solved problems.
The progress on summarising is a little bit better, as there are word2vec/text2vec embeddings that attempt to solve this.
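For a flavour of what embedding-based down-scaling might look like, here is a sketch of a crude extractive summariser: score every sentence by its TF-IDF cosine similarity to the document centroid and keep the top ones. Using scikit-learn's TF-IDF instead of word2vec proper is my simplification, not anything the field prescribes.

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def summarise(sentences, k=2):
    matrix = TfidfVectorizer().fit_transform(sentences)
    centroid = np.asarray(matrix.mean(axis=0))        # the "document vector"
    scores = cosine_similarity(matrix, centroid).ravel()
    ranked = sorted(range(len(sentences)), key=lambda i: scores[i], reverse=True)
    return [sentences[i] for i in sorted(ranked[:k])]  # keep original order

doc = ["Slackware 15 is out.",
       "The release took several years.",
       "Slackware is the oldest maintained distribution.",
       "The weather was nice that day."]
print(summarise(doc))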
Elaborating would require access to external data sources and context, and I am not aware of any progress on this matter.
There is some progress on re-wording, at least up to the level of fooling search engines into believing that a piece of news is distinct from other pieces. However, this is a GAN-like system: search engines are increasingly getting better at detecting auto-rewrites.
All of the progress above is generally concerned with pieces of text.
Compression is basically non-existent.
Conversion exists in the following way:
from \ to | Text | Audio | Image | Video | Digital |
---|---|---|---|---|---|
Text | No need | Good | No | No | No |
Audio | Mediocre | No need | No | No | No |
Image | Bad | Bad (via text) | No need | No | No |
Video | Very bad | Very bad (via text) | ? | No need | No |
Digital | Lossy | Lossy | Lossy | Lossy | No need |
Language translation works, at least for an undemanding customer.
Titles (the highest level of abstraction) are usually available for free.
Classifying the news is also important, but a little difficult to define. Classification is not entirely the same thing as filtering, although the class of a piece of news can be the basis for filtering it out or letting it through.
Verification is hard to define. What is true and what is false in our new world of post-truth?
Terminals may be:
Reception tools:
By intent:
This section is hard to describe, but it is an important point of attention. The time, place, size (weight/length), grouping, and format of the news form an important selling point.
For example, a consumer is driving to work. Driving requires relatively low concentration, so one may, perhaps, want to receive some news at the time. However, the sort of attention a driver may dedicate to consuming the information is limited generally to audio.
Selection in this section is not entirely the same thing as filtering. Vaguely speaking, filtering partitions the news into useful/useless, whereas selection works with the news that have already been judged useful, further choosing those most appropriate for the user at a given time, place, and class.
Sharing is an important part of the news ecosystem.
Feedback is not distant semantically from sharing. In some sense, sharing is the most basic kind of reaction that a user may have.
News pieces have to be stored somewhere. News sources often do not care about making the pieces persistent to any degree.
News providers usually care a bit more about that, but even they often neglect permanence of web-links.
It is not clear, by definition, which news are worth storing for a long time. CCTV recordings, for instance, are usually wiped very soon after recording, after perhaps half a year at most.
Indexing and searching is also a difficult question. Progress exists for text search (e.g. Elasticsearch), and there are things like "search by image", but I do not know the state of the art.
In terms of “selling point”, this section is completely on the back-end side of the industry.
It is this kind of service that shows you which words are "trending" now, what kind of news dominates the agenda, and which news produces more feedback.
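A toy version of such a "trending words" detector, with two invented batches of texts standing in for two consecutive days: rank words by the smoothed ratio of today's frequency to yesterday's.

from collections import Counter

def trending(todays_texts, yesterdays_texts, top=5):
    today = Counter(w for t in todays_texts for w in t.lower().split())
    yesterday = Counter(w for t in yesterdays_texts for w in t.lower().split())
    n_today = sum(today.values()) or 1
    n_yesterday = sum(yesterday.values()) or 1
    def lift(w):
        # Laplace-smoothed ratio of per-day rates.
        return ((today[w] + 1) / n_today) / ((yesterday[w] + 1) / n_yesterday)
    return sorted(today, key=lift, reverse=True)[:top]

print(trending(["slackware release release", "slackware 15"],
               ["weather is nice", "weather again"]))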
If a news service is getting feedback, it inevitably has a profile of its users. This information can be used to improve the news flow, as well as to make money on it.
If the storage is efficient and flow analysis is advanced, it should be possible to build "books" or "narratives" that tell a story as a narrated sequence of events, concatenated and re-worded into the same language. I am not sure this is really feasible now.
These kinds of monetisation may be present at any stage of the pipeline.
Any decent analytic work requires peer-review.
If you are invited to be a peer reviewer for this document, you are encouraged to add your comments into this section.
No conclusion so far.
Structure and Interpretation of Computer Programs is one of the most famous programming textbooks in the world. MIT's introductory programming course was based on it for several decades, and in many universities, including Berkeley, it is taught to this day.
SICP uses Scheme as its main (practically only) programming language. Nevertheless, it cannot be called a Scheme textbook, because it goes far beyond what a "language study" programme usually includes. Suffice it to mention that the course covers constraint-propagation systems, interpreters for programming languages, digital circuit simulators, and even a simulator of a whole processor module.
Most of the topics in the textbook can be completed on a "standard" Scheme (meaning one conforming to the currently latest standard, the Revised^7 Report on the Algorithmic Language Scheme).
However, for a few topics the built-in facilities of the language are not enough. In particular, random numbers, measuring execution time, multithreading, and graphical output are not covered by the language standard.
MIT/GNU Scheme, the implementation the textbook assumes by default, contains the necessary primitives extending the base language so that the course can be completed.
Over the many years since SICP came out, some other Scheme implementations have also provided many of the primitives required for the course.
However, until now there has been no normative document specifying exactly which requirements an interpreter must satisfy in order to "support" the course.
Scheme Requests for Implementation is the community process adopted in the Scheme language family. In some respects it resembles the Java Community Process or the Python Enhancement Proposals. One way or another, it is the main instrument for discussing the development of the language family, and the main instrument for ensuring code portability.
Writing an SRFI seemed to the author of this note a natural choice for formalising requirements for software systems.
Since the assortment of topics offered for completion is rather large, it seemed reasonable to limit the proposed document to a narrow, well-formalised, yet at the same time very abstract topic: graphical output.
The chapter of SICP devoted to the graphics subsystem of the computer was based on Peter Henderson's paper "Functional Geometry". (http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.137.1503, https://eprints.soton.ac.uk/257577/1/funcgeo2.pdf)
To those familiar with the work of Maurits Escher, this image may seem vaguely familiar.
Two ideas lie at the foundation of the techniques of functional geometry.
The picture at the top of the post illustrates this approach.
The proposed SRFI does not provide an exhaustive implementation of the functional geometry method. However, it offers a substrate sufficient for an active student to implement the subset of functional geometry presented in SICP on any interpreter supporting SRFI-203 (the result of this note's author's work).
The full text of the proposal is available at:
https://srfi.schemers.org/srfi-203/srfi-203.html
The abstract and technical details can be found here:
https://srfi.schemers.org/srfi-203/
The SRFI was under discussion for two months, during which two implementations were proposed: one for the Chibi interpreter and one for the Kawa interpreter.
The author of this note (who is also the author of the SRFI) hopes that the language extension he has proposed will be received positively by the community. (Those who already use Scheme can suggest to the vendors of their interpreter that this extension be included in the next release.) The author also hopes that for those beginning to study Scheme, SICP, or computer graphics, this extension will save some of the effort spent on technical details and thereby free up more resources for the learning itself.
Those who consider this work worthwhile are invited to donate via the PayPal link. Everyone is also welcome to subscribe to updates.
C-d – generate a translated document.
C-M-v – validate tags.
C-e – project settings.
C-l – see project files.
C-u – move to the next segment.
C-g – add an item to glossary.
C-f – search.
Total 160 pages.
Not the latest version, but I have already lost the filename. From some kernel documentation file, October 2015, Tejun Heo.
Mounts the v1 filesystem and starts the services.
Is used mostly for proxying control groups to containers.
Has an actual daemon to manage control groups.
echo 10000 > /sys/fs/cgroup/cpu/firefox/cpu.cfs_period_us
echo 30000 > /sys/fs/cgroup/cpu/firefox/cpu.cfs_quota_us
Because when Firefox eats all the CPU, it seems to be doing so with all processes.
/sys/fs/cgroup/memory/firefox/memory.limit_in_bytes
The GUI subsystem apps are: Xorg xfdesktop xfwm4 xfce4-* Thunar* /usr/lib64/xfce4* /usr/libexec/* xscreensaver scim
cpu is set by: echo 128 > /sys/fs/cgroup/cpu/gui/cpu.shares
No swap is set by: echo 0 > /sys/fs/cgroup/memory/gui/memory.swappiness
Using controllers v1, it seems that it’s not possible to set the ’guaranteed’ amount of RAM.
Mission-critical apps are: /sbin/* /usr/sbin/* /usr/local/sbin/* Anything that UID1 runs. $(cat /etc/shells) SCREEN /usr/bin/dbus-daemon /bin/su /bin/sulogin
cpu is set by: echo 256 > /sys/fs/cgroup/cpu/system/cpu.shares
memory: I don’t know how to give a minimal memory guarantee to an app using v1 controllers.
/sys/fs/cgroup/memory/memory.memsw.limit_in_bytes needs to have the value of the swap size. Command: free -b | awk '/Swap/ {print $2}'
I need to add it to cgrules.conf, right?
CONFIG_CFS_BANDWIDTH, cpu controller. Seems weird, as if I have to make a group for every process out there.
echo 100000 > /sys/fs/cgroup/cpu/cpu.cfs_period_us
echo 360000 > /sys/fs/cgroup/cpu/cpu.cfs_quota_us
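The same knobs can of course be driven from a program. A sketch in Python, assuming the v1 cpu controller is mounted at /sys/fs/cgroup/cpu and the script runs as root; the group name and the numbers are illustrative.

import os

CPU_ROOT = "/sys/fs/cgroup/cpu"

def limit_cpu(group, period_us=100000, quota_us=30000):
    # Create the group and write the CFS bandwidth knobs.
    base = os.path.join(CPU_ROOT, group)
    os.makedirs(base, exist_ok=True)
    with open(os.path.join(base, "cpu.cfs_period_us"), "w") as f:
        f.write(str(period_us))
    with open(os.path.join(base, "cpu.cfs_quota_us"), "w") as f:
        f.write(str(quota_us))

def add_pid(group, pid):
    # Moving a pid into the group's tasks file subjects it to the limit.
    with open(os.path.join(CPU_ROOT, group, "tasks"), "w") as f:
        f.write(str(pid))

# limit_cpu("firefox", quota_us=30000); add_pid("firefox", 12345)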
#!/bin/sh
#if [ "$*" != "/user" ]; then
cgdelete -g cpu:"$*"
#fi
#!/bin/sh
. /etc/rc.d/init.d/functions

start() {
  echo -n $"Setting the cpu cgroup release agent: "
  echo "/sbin/rc.auto_cpu_cgroup_remover" > /sys/fs/cgroup/cpu/release_agent
  for username in $(awk -F: '$3 >= 1000 && $1 != "nobody" {print $1}' /etc/passwd); do
    cgcreate -g cpu:/$username/private -t $username:users -a $username:users --dperm=755 --tperm=755 --fperm=755
  done
  echo
  # I also need to add a dynamic rule to the cgred service... TODO
  chmod +x /etc/profile.d/00lwf_bash_group.sh
  chmod +x /etc/profile.d/00lwf_bash_group.csh
  return $?
}

stop() {
  echo -n $"Clearing the cpu cgroup release agent: "
  echo "" > /sys/fs/cgroup/cpu/release_agent
  chmod -x /etc/profile.d/00lwf_bash_group.sh
  chmod -x /etc/profile.d/00lwf_bash_group.csh
  echo -n $"Clearing user groups."
  for dirname in $(find . -type d -not -path '.' -not -path '..' -printf "%f "); do
    cgdelete -r -g cpu:/$dirname/private
  done
  return $?
}

status() {
  echo $"Release agent: "
  cat -t /sys/fs/cgroup/cpu/release_agent
  echo $"Profile status:"
  for file in /etc/profile.d/00lwf_bash_group.sh /etc/profile.d/00lwf_bash_group.csh; do
    if [ -x "$file" ]; then
      echo "File '$file' is executable"
    else
      echo "File '$file' is not executable or not found"
    fi
  done
  return $?
}

case "$1" in
  start)
    start
    RETVAL=$?
    ;;
  stop)
    stop
    RETVAL=$?
    ;;
  status)
    status
    RETVAL=$?
    ;;
  restart)
    stop
    start
    RETVAL=$?
    ;;
  *)
    echo $"Usage: $0 {start|stop|status|restart}"
    RETVAL=2
    ;;
esac

exit $RETVAL
Add these lines to the doinst.sh:
ln -s /etc/rc.d/rc.lwf_lennarts_bash_trick /etc/rc.d/rc3.d/S00cpu_cgroup_remover
ln -s /etc/rc.d/rc.lwf_lennarts_bash_trick /etc/rc.d/rc4.d/S00cpu_cgroup_remover
if [ "$PS1" ] ; then
  #mkdir -m 0700 /sys/fs/cgroup/cpu/user/$$
  agroupname=/users/$(whoami)/private/$$
  cgcreate -g cpu:$agroupname
  echo $$ > /sys/fs/cgroup/cpu$agroupname/tasks
fi
/bin/echo “I have no idea how to implement this in C-shell.”
I'm not sure about the next line: is one millisecond a lot or not?
echo 1000000 > /proc/sys/kernel/sched_min_granularity_ns
TODO: awaits moderation
So I can just as well configure PAM to juggle groups. But I will defer this till version 2.
This file contains unknown words, and their explanations, that I found while reading the manual of Org-mode 9.1.9.
TODO
TODO
This file was written while I was reading "SSH Mastery" by Michael W. Lucas.
Answer: TODO
Answer: TODO
Answer: TODO
Answer: TODO
Answer: TODO
Answer: TODO. I do have a /var/run/utmp , although no /run/utmp .
Answer: TODO.
Answer: TODO
Answer: org-mode?
Answer: TODO.
w and who definitely parse utmp (which is probably /var/run/utmp on Slackware Linux). But how does Finger find out who’s logged in?
Modern Fortran is quite an advanced language, which is still largely compatible with the old Fortran from the 50s.
It has a very rich syntax (and that is not a good thing).
! Hello, this is a comment print *, "Hello, world"
Hello, world
program main use, intrinsic :: iso_fortran_env, only: output_unit print *, "Hello, world" end program main
Hello, world
Call for Papers
The 2020 Scheme and Functional Programming Workshop is calling for submissions.
The Scheme and Functional Programming Workshop is a yearly meeting of programming language practitioners who share an aesthetic sense embodied by the Algorithmic Language Scheme: universality through minimalism, and flexibility through rigorous design.
We invite high-quality papers about novel research results, lessons learned from practical experience in industrial or educational setting, and even new insights on old ideas. We welcome and encourage submissions that apply to any language that can be considered Scheme: from strict subsets of RnRS to other “Scheme” implementations, to Racket, to Lisp dialects including Clojure, Emacs Lisp, Common Lisp, to functional languages with continuations and/or macros (or extended to have them) such as Dylan, ECMAScript, Hop, Lua, Scala, Rust, etc. The elegance of the paper and the relevance of its topic to the interests of Schemers will matter more than the surface syntax of the examples used. Topics of interest include (but are not limited to):
Interaction: program-development environments, debugging, testing, refactoring
Implementation: interpreters, compilers, tools, garbage collectors, benchmarks
Extension: macros, hygiene, domain-specific languages, reflection, and how such extension affects interaction
Expression: control, modularity, ad hoc and parametric polymorphism, types, aspects, ownership models, concurrency, distribution, parallelism, non-determinism, probabilism, and other programming paradigms
Integration: build tools, deployment, interoperation with other languages and systems
Formal semantics: theory, analyses and transformations, partial evaluation
Human factors: past, present and future history, evolution and sociology of the language Scheme, its standard and its dialects
Education: approaches, experiences, curricula
Applications: industrial uses of Scheme
Scheme pearls: elegant, instructive uses of Scheme
Important dates
Submission deadline is 15 May 2020. Authors will be notified by 12 June 2020. Camera-ready versions are due 30 June 2020. All deadlines are (23:59 UTC-12), “Anywhere on Earth”.
Submission Information
Paper submissions must use the format acmart and its sub-format acmlarge. They must be in PDF, printable in black and white on US Letter size. Microsoft Word and LaTeX templates for this format are available at:
http://www.sigplan.org/Resources/Author/
This format is in line with ACM conferences (such as ICFP with which we are colocated). It is recommended to use the review option when submitting a paper; this option enables line numbers for easy reference in reviews.
We want to encourage all kinds of submissions, including full papers, experience reports and lightning talks. Papers and experience reports are limited to 14 pages, but we encourage submitting smaller papers. Lightning talks are limited to 192 words. Each accepted paper and report will be presented by its authors in a 25 minute slot including Q&A. Each accepted lightning talk will be presented by its authors in a 5 minute slot, followed by 5 minutes of Q&A.
The size limits above exclude references and any optional appendices. There are no size limits on appendices, but the papers should stand without the need to read them, and reviewers are not required to read them.
Authors are encouraged to publish any code associated to their papers under an open source license, so that reviewers may try the code and verify the claims.
Proceedings will be printed as a Technical Report at the University of Michigan and uploaded to arXiv.org.
Publication of a paper at this workshop is not intended to replace conference or journal publication, and does not preclude re-publication of a more complete or finished version of the paper at some later conference or in a journal.
https://icfp20.sigplan.org/home/scheme-2020#Call-for-Papers
Original by: Jason Hemann, Northeastern University
Organizing committee
Michael D. Adams (Program Co-Chair), University of Michigan
Baptiste Saleil (Program Co-Chair), IBM Canada
Jason Hemann (Publicity Chair), Northeastern University
Program committee
Michael D. Adams (Program Co-Chair), University of Michigan
Baptiste Saleil (Program Co-Chair), IBM Canada
Maxime Chevalier-Boisvert, Université de Montréal
Ryan Culpepper, Czech Technical University
Kimball Germane, University of Utah
Yukiyoshi Kameyama, University of Tsukuba
Andy Keep, Cisco Systems, Inc
Julien Pagès, Université de Montréal
Alexey Radul
Steering committee
Will Byrd, University of Alabama at Birmingham
Will Clinger, The Larceny Project
Marc Feeley, Université de Montréal
Dan Friedman, Indiana University
Olin Shivers, Northeastern University
(setq org-latex-listings-options
      '(("numbers" "none")
        ("frame" "single")
        ("basicstyle" "\\footnotesize\\ttfamily ")
        ("keywordstyle" "\\ttfamily")
        ("upquote" "true")
        ("extendedchars" "true")))
(setf (alist-get "hx" org-export-smart-quotes-alist nil nil #'equal)
      '((primary-opening :utf-8 "”" :html "”" :latex "\"" :texinfo "’’")
        (primary-closing :utf-8 "”" :html "”" :latex "\"" :texinfo "’’")
        (secondary-opening :utf-8 "’" :html "’" :latex "\'" :texinfo "`")
        (secondary-closing :utf-8 "’" :html "’" :latex "\'" :texinfo "'")
        (apostrophe :utf-8 "’" :html "’")))
#+latex_header: \DeclareBibliographyCategory{badbreaks} #+latex_header: %\addtocategory{badbreaks}{sicsc,software_plantuml} #+latex_header: %\AtEveryCitekey{% #+latex_header: % \ifcategory{badbreaks} #+latex_header: % {\defcounter{biburlnumpenalty}{9}} #+latex_header: % {}}
This should allow you to turn your paper into a publishable piece in under 1 month.
It should be possible to fix the paper using 4-5 prints (~$20).
This document records some practices I found myself using while reading texts. It is not at all exhaustive, complete or even sufficiently encompassing. It does not claim to be efficient. It is a work in progress. It is open for discussion. I intend to write down here some practices that I find useful, sometimes providing justification.
Reading is a thing that can be computer-assisted nowadays, and if it can be, it should be. I try to do the following:
Unless the book is about illustration or graphics, black-and-white is enough. I print it at a typographic shop, ask for paperback binding, and write the title on the book spine by hand.
I usually read PDF books, about 90% of the time. I have the PDF file open either in Evince or in Emacs pdf-tools, which simplifies searching. In the remaining 10% of cases it is either an HTML or an EPUB version, which I still usually convert to PDF or try to open in Evince.
I end up googling quite a lot, so I have Google open in the browser (Firefox) for quick access.
I usually use Google Translate, and it’s enough just about 90% of the time. In the 10%, I use Wordreference, BKRS and mdbg.
I use a mechanical pencil (I do not like sharpening pencils), 0.9HB. I have an eraser too, relatively hard. I do not use coloured pens or pencils for annotation.
I use a ruler to focus on reading, and to sometimes demarcate important pieces of the text.
I use it when reading Chinese material. Finding individual characters for input tends to be easier with handwriting recognition than with radical search. Using Cangjie might be easier, but I do not know it yet.
I use Emacs org-mode’s time tracking capabilities for measuring how much time exactly a book takes. It also helps me stay focused on reading, since I do not want to obscure time tracking data.
I try to have note files for books I read. Emacs and other special software have dedicated functions for making notes, but I did not find a way to use them efficiently. This only relates to humanities, light, or fiction literature; scientific literature I process differently.
The notes file is an org-file. It consists of two root nodes: a node for vocabulary and a node for remarks.
The vocabulary node is just called “vocabulary”. It contains just a single table of the following format:
# | unknown word or phrase | translation |
---|---|---|
0 | sepulka | сепулька |
I usually do not fill in the dictionary at the moment of reading. I consider this a separate task, to be done later. (Maybe this should not be done like this?)
In the Citations and Remarks node, every heading corresponds to a piece of the text that made me generate a non-trivial thought. The thought is written in the body of the heading.
This is where having both the digital and the paper copy comes in handy. When I find an interesting piece of text, I can search for it in the electronic copy and copy-paste it into the notes file.
I have heard it is recommended to use a "sliding window" to read text. I do not use one. However, I use a ruler to keep my eyes from wandering ahead of the narrative: I put it right under the line I am reading and move it down after the line has been read.
As mentioned above, I do not use colours for annotations, because I don’t know how to make them efficient. (Suggestions welcome.)
Apart from colours, there are the following markup tools:
I tend to underline words and phrases that attracted my attention. If they seem noteworthy, I then copy them into the notes file remarks section. I tend to circle the words that are unfamiliar or unknown. I try to copy them to the notes file vocabulary section. I sometimes use a ruler to mark some extremely important pieces of text.
I sometimes circle “large sparse” pieces of text which make little sense and cross them out with several strokes to obscure.
Sometimes I write remarks on the margins. Sometimes I squeeze remarks in between the lines. Both of the above are not very efficient.
I tend to google word meanings if I do not understand them on the spot, but I do not write the meanings into the table. This is because I want to visit the vocabulary again and have some "context to remember".
I tend to google concepts that I do not know, but I do not write them anywhere. (Shall I have the third section in the notes file?)
Do not try to make it a full-scale university-level essay. It would be a waste of time. But try to reiterate all the thoughts that you found useful, well versed, or non-trivial.
This file is about learning Cinelerra. I decided to learn Cinelerra because it seems a capable video editor for many operating systems. The context is the following: international travelling is at a minimum now, because people are afraid of the "Global Coronavirus Pandemic", and the conference has been cancelled.
So we need to have a “video presentation”. What does it even mean?
Let’s say, it is going to be a “Video Review” on the subject described in a paper.
A video review should serve the following purposes:
What do I need then?
What can I start with? Record a video of myself “just talking” about SICP?
In the Cinelerra manual, there are no exercises. It's a shame! Can I write them?
There are common terms used often. Some of them are listed in the manual, but I will repeat them anyway for completeness.
Brightness – light intensity; sometimes marked with Y.
Hue – colour, as in "ratio of pure colours".
Saturation – distance from grey.
What is BT601, and how is it different from BT709 and BT2020?
https://blog.maxofs2d.net/post/148346073513/bt601-vs-bt709
It is the set of coefficients used for transcoding RGB->YUV.
The blue dot means that the video is paused.
This is a format for encoding/compressing raw pixels. "Encoding/compressing" because the data is not compressed mathematically; some data is simply dropped. Presumably, this is enough to keep the image "visually identical". The block is always two pixels high (I guess this is somehow connected to interlacing), but several pixels wide. The first number is the width of the block. The second is the number of chroma samples used in the first row; it probably makes sense to make it a divisor of the first number. The third is the number of chroma samples in the bottom row; it probably should be a divisor of the first two numbers.
However, in principle, having a good algorithm may allow for different combinations.
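The arithmetic above can be checked directly; a small sketch computing how much of the raw 4:4:4 data a J:a:b scheme keeps (assuming the usual two-row block and two chroma channels).

def subsampling_fraction(j, a, b):
    # Samples stored for a J-wide, 2-high block, relative to 4:4:4.
    luma = 2 * j              # Y is never subsampled
    chroma = 2 * (a + b)      # Cb and Cr together
    full = 3 * 2 * j          # three full-resolution channels
    return (luma + chroma) / full

print(subsampling_fraction(4, 2, 0))  # 0.5   -> 4:2:0 halves the raw data
print(subsampling_fraction(4, 2, 2))  # ~0.67 -> 4:2:2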
FourCC is a stupid extension to the AVI container format: a number designed to indicate which encoder exactly was used to encode the image, if/when it is of the MPEG-4 family.
You can check what a camera can stream with:
At least for recording a screencast.
Hm… can OBS actually record several tracks into a file, rather than an already combined track?
A long, confusing, self-inconsistent and contradictory "review" follows.
This book is very messy. It uses very obscure language, and it is missing supporting links for several critical statements. (It does provide references for many other statements.) Even the number of words I had to write down into the "learn later" list was much smaller than similar texts give me, as the confusion came mainly from the misuse of easier words rather than from the use of many complicated ones.
As the book is badly written and extracting meaning from the text is very hard, this review has to be in the form of "what thoughts the book made me think" rather than "what the book is about".
So, the whole book is based on the concept of the "imaginary". It is more about a Utopian fantasy than about anything existing. Butler supports her right to do so with the weakest supporting argument of all time: "would you like to live in a world in which no one thinks about such a development perspective". Indeed, this can be rephrased as "what, do you care too much about what I write?". It is basically an appeal to Freedom of Speech, which is a last resort.
I do grant her that right, but then I would rather use this text not as something describing anything workable, but as an exercise in reading and extracting whatever meaningful thoughts there may be in a deliberately obscured text. An exercise in philosophising rather than an exercise in philosophy.
The author seems convinced that the human world in general can be described using essentially two main tools: politics and psychology. Not digging very deeply, she seems to use Freud and his theory as the source of psychological theory, and Hobbes as the source of political theory. While both are very respectable founders of their fields, it staggered me that she regards them so highly even while being aware (?) of the modern state of research. The idea here, perhaps, is that politics describes human behaviour "en masse", whereas psychology describes interpersonal (she uses the word "dyadic", which is already strange enough) interactions. It is obvious, but someone has to say it: human societies are so much more complex than just one-to-one and one-to-many interactions!
Butler seems to love Freud. This is a little surprising, given that Freud is probably not the most advanced source of psychological knowledge nowadays. Indeed, he was extremely instrumental in founding the field, but that was long ago, and much more substantive research has been done since.
She also seems to focus mostly on Hobbes when looking at political theory. She acknowledges the existence of Locke and Rousseau, but very superficially, and mostly with respect to the "state of nature". (Rousseau is mentioned half as often as Hobbes, and Locke's theory is completely ignored. Furthermore, why do we even need the "state of nature" in this discussion?) And again, why are we focusing on the founders so much more than on those who attempted to improve the theory?
She tries to "imagine" a world that is non-violent (a very confusing and convoluted term that she spends a lot of time describing). This world, she argues, has to be based on the "ethics of interdependence". The value of a life, she argues, is then based on its "grievability" by other people.
The argument she is trying to build jumps from the imaginary "freedom as independence" to the "freedom of total inter-dependence". It jumps as if there were no middle ground.
She does not completely ignore the existence of groups, but she entirely ignores the very concept of group dependence. Indeed, people cannot be fully independent, but total dependence is also not how things work. Freedom (a word she does not use a lot) means that people choose upon whom to depend.
The groups that appear in her text are mostly groups of similarity, and most often groups of blood relatives. She also speaks about groups of grievability and groups of power, but almost entirely ignores groups of friendship and cooperation. Yet those are the groups that are actually worth living for.
The concept of “living” plays a large role in her argument, and she writes a lot of words to try and describe what it is to be alive, and how we regret the loss of life. She even proposes to value lives according to how much we would regret the loss of those lives…
But why would we even do that? It is very human and emotional to grieve after a loss. But that very notion has long been proven to be one of the least useful in the world. As an English proverb says, "There's no sense crying over spilt milk."
The value of the life that is already lost is then known precisely, and equal to zero.
The confusion is exacerbated by the fact that she dodges the urge to at least define what it is to be living. I know that some mental gymnastics needs to be done in order to declare the "green" position life-preserving and the "pro-life" position non-life-preserving, but hey, it does not have to be that inconsistent. (It is also discomfiting how much American politics repeatedly influences the world outside the U.S. in ways largely irrelevant to the outside world.)
The concept of imagination is used a lot in her treatise. This is indeed a place where the book has proven useful. The result of her imagination I consider worthless, but the way she self-reflects on imagining the world, and also tries to model the way other people imagine the world, made me think a lot.
I have never really thought about the "imagination machinery" in the human brain. And I really like the concept of phantasy, distinct from fantasy by the presence of a subconscious component. I really liked thinking about different kinds of imagination: imagining scenes, imagining words (both written and spoken), imagining 2D objects, imagining feelings, and much more.
I think that the “imagination software” in the brain is really worth exploring.
The part of the treatise dedicated to saving and killing largely revolves around the desire to destroy and the desire to save, as given by Freud. I cannot say that I can distil any meaningful conclusion from her words. Moreover, the whole discussion seems very contrived, produced only in the name of deriving certain political slogans of the day. That is, it looks (to me) largely like fitting the argument to the answer the author already believes to be true. The very structure that is dedicated to preserving life she considers a manifestation of a "dominance hierarchy" worth bringing down. Indeed, such structures often become corrupt, but she produces no decent substitute concept, beyond some mental gymnastics modelled after Kant and Freud.
She does give a great account of the police in the USA killing people, especially black people. The language of those parts of the book is much more lucid and paints a much more vivid image. This makes me think that as a political professional (e.g. a political campaign mastermind) she could have had a role that would fit her much better than that of a professional philosopher.
This section will just list a few thoughts that I don’t think actually fit into any reasonable piece of argument, but are worth scavenging from the book.
Obviously, as mentioned previously, she cites Freud and Hobbes. As one of Freud's successors, she speaks about Melanie Klein. She cites almost all famous Marxists of the 20th and 21st centuries, starting from Althusser. Laplanche and Derrida, obviously, creep into the narrative too; how could they not. Foucault and Fanon even get their own chapter.
Melanie Klein seems to be the only psychologist that she seriously considers, besides Freud. Perhaps, worth looking into.
Unmentioned in the book, but seems related to me is the work of the late Sir Roger Scruton – Art and Imagination: A Study in the Philosophy of Mind.
The most disappointing part of the book is that the actual analysis of non-violent action gets as little attention as can possibly be given while the book still bears some relevance to non-violent protest. Apart from "using human bodies as a wall" and "coming to the shores of Europe in boats full of people", not much is said about the ethicality (or the absence thereof) of different kinds of peaceful protest. Strategy, tactics, effectiveness – all of that is mostly ignored, apart from the insistence that structural violence itself will always try to present peaceful action as violent. As if we did not know that. A lot is said about self-defence, but very little about extralegal defence of others. In particular, the subject of defending humans against dangerous forces of nature is completely ignored.
When you approach someone well known as an adversary of the forces you generally sympathise with, there may be several expectations. You may expect the text to be outrageous demagoguery, aimed at appealing to emotions and ignoring any traces of rationality. This at least gives you the guilty pleasure of imagining a wild combat of ideas. You may expect a cleverly twisted argument, crafted so well that you find it very hard to penetrate the logic and find flaws. Then it is upon you to sharpen your mind for a proper duel with your opponent. Or you can expect to be mistaken and to be exposed to ideas that have not yet had a place in your mind, and that would be the best possible option. Getting enlightened, after all, is one of the best feelings in the world.
What you probably do not expect, although you should, is the book simply lacking any sort of cohesive picture. Is this one of the aspects of the "banality of evil" that Hannah Arendt was writing about? "The Force of Non-Violence" is from this last category. Of course, I am not equating Butler with any of the horrible evil-doers of the world.
But the book still leaves me with that creeping in feeling: “How can it be that someone who manages to take simple things and express them in a totally incomprehensible way happens to be one of the most prominent philosophers of our time?”
It leaves you with a feeling that you have missed something. Is that “something” ultimately incomprehensible to just that kind of people that you belong to?
But no, over and over I keep seeing in this book only an exercise in philosophising and nothing else. Chunks of not very consistent reasoning interspersed with literature reviews of various philosophers and journalists. Attempts to make a well-structured text that keep failing over and over.
On the other hand, at least it has made me produce the longest book review I have written so far. At least for that I should be grateful.
To paraphrase: constantly working with Chinese people, scientists, programmers, physicists, whoever, I have never felt that I differ from them. Is that really so?
Well, all right; at the very least, reading books about oneself is not so bad. In general, I should think about this more carefully.
It is hard for women, but apparently possible, to learn not to be jealous of one another. This is perhaps one of the smartest thoughts in the book. Men are jealous in order to take away or to protect. Women are jealous in order to keep a rival from rising. In this sense Russians, on average, are jealous in a much more "female" than "male" way.
In particular, this is why women play better as a team in "inessential" matters, when they feel free from the need to keep each other in check. One has to build walls: to invent a context in which it is clear that you are "not in the same jar".
The book rates a C plus. The part that is "about women" is frankly weak. When she writes about a "historical conspiracy against women", it looks wild. (Although some of these arguments could perhaps help in a discussion with chauvinists.) When she writes about the traits that are supposedly "more characteristic of women", it looks pitiful and unfounded. The metaphor of the crystal slipper and the army boots is not developed at all. The chapter about time is very muddled, and it is clear the author does not understand it well. That is, the idea that you should not begrudge extra effort if you are "in your right place", or that "this is not your thing", is sound, but it too is developed very poorly. Something like "do not wear clothes above your status" sits right next to "dress so that people think of you what you want them to think".
On the other hand, the chapters about management and office work are written with real knowledge of the subject and are quite sharp. The advice is sensible. Only there is very little specifically female in it. The call to treat the enterprise allegorically as a family, a la "be the mom", could just as well be given to men. Anyway, the advice on working with people is substantive; it is worth revisiting the book for it from time to time.
Here one cannot help thinking that women have no traits at all in which they differ from men, so weak are the chapters aimed at applying female traits. Although, to be honest, I think it is rather that Ms Chu herself simply does not know how to apply them.
Why, exactly, are the terrain types given in the chapter on Fa rather than Di? It seems she does not understand Sun Tzu all that well either.
Perhaps the most substantive advice given in the book is to watch very carefully that the women in a team do not drag each other down. For this purpose it even suggests building certain walls between them. (Except, apparently, between partners.) It is interesting that for men winning themselves is more important than taking down a rival, whereas women consider keeping a competitor out more important than winning themselves.
Although in the sexual domain this is not meaningless. If a man gets a woman, that is very pleasant for him, and the fact that she may have a husband is an annoying inconvenience, but no more than that. For a woman, though, the presence of a pregnant "second wife" is costly: it takes away many resources that could have gone to herself.
The "Art of War" until not so long ago used to be fairly unknown to the western audience, maybe because there was already a well-established western military tradition, represented by von Clausewitz. The Soviets, however, perhaps due to being actively involved with Asia since the very establishment of the state, were much more open to the Eastern tradition, and the Art of War was part of the Soviet intelligence curriculum for a long time. Eventually the West also fell victim to this millennia-old book on strategy and tactics, and the Art of War began its triumphant parade through business culture. This was partly fuelled by the very visible progress that the Asian cultures, from Mongolia to Japan, had made by the end of the 20th century. One of the most prominent marks of the era was a recent British independence slogan: "Will the Brexit Referendum make London the Singapore-on-Thames?" Such an astonishing claim to be made in the former capital of the world.
The book by Ms Chu introduces the concept of the 21st century as the "Century of Asia" at its very beginning. Indeed, although the book is full of citations from the Art of War, my feeling was that they were cherry-picked with the main purpose of giving Ms Chu's book an Asian flavour, rather than being the core knowledge carrier.
Maybe it is actually for the best. To each his own; I am still planning to lay my hands on the old book itself, and thankfully the "for Women" version was not much of a spoiler.
The second claim firmly made in the book is that the 21st century is going to be the century of women. That is probably a thing nobody is going to argue against, since technology is making the world a much more comfortable place for women. Even though I would be more careful about attributing the whole century to just a single human trait (sex), we are definitely going to see (in fact, we already see) more female influence on everyday life.
The book is roughly structured around five aspects of success.
They bear fancy Chinese names, but for simplicity I am just writing them out in layman terms:
Each chapter is dedicated to one of the components of success, and in each of them the author tries to present, beyond the basic concept of the component and the actions needed to nurture it, some additional traits that are supposedly more related to women than to men.
Quite unsurprisingly, as the book progresses, each subsequent chapter bears less “femaleness” and more “practical guidelines”.
In general, I cannot say that I rate highly the parts more dedicated to sexual dimorphism. The management chapters were quite good: I made a lot of notes and rearranged some of my working practices following her advice. The "female" parts, to be honest, did not do much to illuminate how exactly sexual dimorphism affects the differences in performance between employees of different sexes. The book contains quite a lot of female encouragement and debunking of old misconceptions about women, which is a great thing, but seems of less use to those who have already given up those misconceptions. However, its development of "conceptions" (pun intended), that is, the understanding of when and how exactly sexual differences can be used as leverage, and where they should be kept in mind as a thing of concern, is sub-optimal, in my opinion.
The book is short and can be done in a couple of evenings. It is probably worth reading as encouraging material, although women who have already decided to be the best version of themselves are unlikely to need any more encouragement. The anecdotal evidence is nice; it feels very good to be able to relate yourself to real-life examples. Some paragons of female success are also given, from various areas of life, from politicians to doctors and managers. The management-related, gender-agnostic chapters are simply good and worth revisiting more than once. Is it the book to read to find out about gender differences and how to use them? Perhaps not, and it is also not a good book about the "Art of War".
The main military thought you can derive from it is that avoiding defeat may often be a much more efficient strategy than winning a battle. After all, Suvorov is believed to have emphasised this a lot.
I have read "The Madness of Crowds". It is a book about several kinds of inequality in society, to which a lot of effort has been devoted in order to compensate for them; although up to a certain point much of this effort paid off, recently the effects have become more controversial than helpful.
Douglas discusses four big inequalities (big in the amount of text dedicated to them), and many more small ones.
I think the main point to take away from the text is that much more thinking needs to be done before deciding on an important issue, even if the issue seems perfectly obvious to the referential group. If someone is offering you a "clear solution" to an issue, doubt it, even if it is a direct extrapolation of the solution to the same issue as it stood in the past. Doubt it if it is the same solution to a different issue, no matter how similar it may look. Doubt it in any circumstances.
Another thing I take from his book is: read the classics. Not the classical fiction, but the classical thought. The older guys, like Democritus, Protagoras, Plato, Aristotle, Socrates, Confucius, the Babylonians, may have been outdated and superseded… but it is astonishing to see how slowly human nature changes compared to human tools.
There is also one thought that is not terribly new, at least I heard it several times from different people, but which, I tell myself, is worth repeating. When listening to people giving advice, try to distinguish the people who are giving you good advice because they want you to become better from people who give you bad advice because they want you to fail.
The book is not too long, a native speaker can probably get through it in a couple of evenings.
construe acquiesce squeal concomitantly plash collostructional waning bray neigh snuffle warble grawl whoop hoot coo guffaw pshaw
Is it a capability to be inserted into other constructions? Or ability to accept subconstructions?
Is it the capability to accept an object? As in “greet someone”, but not “sit a chair”.
It is super annoying because many people have the same name.
The way-construction has been mentioned many times, but no example has been given.
"The verb describes the means whereby a reaction or an emotion is expressed." What is a subitem of what? As written, it seems that the ROC is a subitem of the "means subschema", not the other way round. I would have written it as "Similarly to the way-construction, in the ROC, in the means subschema, the verb describes the means whereby a reaction or an emotion is expressed", since "in the means subschema" is an elaboration of the previous entity (that is, the ROC).
It took me about 6 hours to read the full text. These 6 hours spanned two long sessions and one short one.
The main thing worth mentioning is the almost total absence of research protocols. (There is just one regular expression, presented for querying a not very well specified database.) The research is therefore not reproducible: neither the databases queried (except the OED) nor the analytical procedures are specified well, whether in the form of code or at least a numbered list of actions in natural language.
The second thing, probably not strictly required but almost obvious to include, is the application part; nothing is said about applications of this study. Clearly, the most obvious application would be a software subroutine for identifying ROCs in a text (see the sketch after this list), to aid readers and, even more importantly, translators in spotting those expressions. Secondly, a software tool could help identify which subschema of the ROC is employed in each particular case, and thus aid human or AI translators in finding better expressions in the target language.
Thirdly (although way beyond the scope of this paper), a tool could be written to identify non-ROC patterns in the source non-English languages that would admit a solid translation into English with the help of a ROC.
Fourthly, it would be interesting to forecast which verbs, not yet transitive, are the most likely to undergo transitivisation in the near future. Such a forecast, if successful, could make present-day software more future-proof and more robust. If not successful, it would serve as a falsifying tool (in the Popperian sense), indicating that some deeper process may actually be taking place and suggesting that the described phenomena be reassessed in a more abstract framework.
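As a very rough sketch of the first application above: flag ROC candidates such as "she smiled her thanks" with a verb list and a regular expression. The verb list and the pattern are my own illustrative assumptions; the paper itself gives no protocol to reuse.

import re

# An illustrative (and certainly incomplete) list of reaction verbs.
REACTION_VERBS = ["smiled", "smile", "nodded", "nod", "waved", "wave",
                  "grunted", "grunt", "sighed", "sigh"]

PATTERN = re.compile(
    r"\b(" + "|".join(REACTION_VERBS) + r")\s+(his|her|their|my|your|our)\s+(\w+)",
    re.IGNORECASE)

def roc_candidates(text):
    # Returns the matched spans; a real tool would need POS tagging
    # to filter out ordinary transitive uses.
    return [m.group(0) for m in PATTERN.finditer(text)]

print(roc_candidates("She smiled her thanks and nodded her approval."))
# ['smiled her thanks', 'nodded her approval']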
In any case, it was fascinating reading. I am happy to have been introduced to the world of language analysis.
Richards J. Heuer Jr. is one of the people who revolutionised the way intelligence content is produced in the Central Intelligence Agency of the U.S.A. Speaking crudely, his main contribution was the introduction of the "Scientific Method" into the everyday routines of CIA analysts. This book is partly his self-reflection on this transformation, and partly a list of heuristics that any intellectual worker could employ to improve his own efficiency (and self-satisfaction). I found it very good. It clarified quite a few concepts I had been only vaguely aware of, and helped me hone a few of my own ideas.
I would actually recommend reading it to everyone, and would perhaps even suggest studying it at school, because it is hard to find a skill of more generality than the skill of thinking. And the intelligence aroma just makes the book more exciting for kids.
If you are interested in more detail, welcome under the cut.
In English, "intelligence" can mean several things, the best known being the skill of thinking, and the profession of preparing reports about the world to the Head of a State. The coincidence of these two words is, although probably accidental, very illustrative.
It is astonishing how living beings, when acting in groups, tend to replicate themselves on a larger scale. A company or a country can be seen as having a digestive system (the economy), a fighting system (the military), an immune system (the police), a nervous system (the government), as well as, obviously, a thinking system. To my own disgrace, I used to consider academia to be this thinking system; however, after reading Heuer's book, I am much more sceptical about this attribution. Intelligence agencies, as well as political parties, seem to be much more akin to the "thinking subsystem" than the ivory-tower people disconnected from the empirical world.
The Central Intelligence Agency is perhaps the most well-known thinking body in the world. It is not that being famous is a benefit for a secret service, but I guess they could not have avoided it. In the seventies they became so big that they had to develop an internal self-reflection mechanism, which eventually became the Center for the Study of Intelligence.
The Center played a huge role in the CIA's internal reform, which in turn led to a (self-proclaimed) huge boost in efficiency.
This book, written by a prominent player in that reform, summarises several observations, supported by quite a solid (for the social sciences) scientific base, aimed at identifying what exactly the process of (human) thinking is, where it fails, and how it can be improved.
Intelligence is, first and foremost, bureaucracy. I know, it seems like quite an obvious thing to say, but I am always impressed by exactly how much I underestimate seemingly obvious things. I had to bridge quite a huge cognitive gap between myself and the author by imagining hordes of CIA clerks writing thousands of pages of… can I call them reports? Maybe the word "encyclopedias" would be a better fit?
Imagine a government official needing to make a decision. Nobody, not even the smartest, can decide independently on every issue that arises. Moreover, it is probably impossible even to select a subset of issues and research those carefully, since the urgency of issues is not directly related to their importance; some (most?) are pushed in by the political agenda, which cannot simply be ignored.
Therefore a super-ignorant politician is in desperate need of someone he can ask: "what shall we do with this super urgent thing that I have no idea about?" God bless Google and Wikipedia for being able to deliver context to the executives. But even having the context is not enough, because once the context is established, the politician is very likely to ask "what happens if we do X in this context Y?". And this is a question Wikipedia cannot answer. Moreover, it is probably just as unlikely to produce a high-quality answer to such a question on the spot, because answering such questions is usually a slow job. Therefore, there needs to be someone who "pre-caches" answers to such questions.
My imagination already paints immense cupboards filled with identically bound books, differing only by the titles on their spines: "What shall we do if North Korea invades South Korea in 2010 (2011, 2012, 2013, 2014…)", all very similar to each other, but ready to be placed on the U.S. president's (or, indeed, a Chinese general secretary's) desk in case the event occurs.
Essentially, you have writers whose imagination is too poor to entice readers, or journalists whose writing style is too dry, writing almost identical texts on almost identical subjects, day after day, year after year. Such extremely boring people are called "intelligence analysts".
There are also people whose job is to invent questions to ask them, as well as people whose job is to assess whether the reports being made bear as much resemblance to reality as possible.
If all of this sounds an awful lot like a very sad version of cosplaying Google – that's because it is.
So, the intelligence people invented Google long before Google. And Google's question-answering system is perhaps the closest approximation to "artificial intelligence" that we may think of nowadays. This leads us to the question: what exactly is artificial intelligence?
In fact, this thought did not leave me the whole time I was reading this book. Reading it is almost like writing an interpreter for a peculiar (military-styled) query language, in a peculiar (psychologically-styled) implementation language, producing reports in a peculiar (politically-styled) output language.
And a language interpreter essentially consists of a database (the book speaks about knowledge representation in human brains) and a set of subroutines that operate on these data. Just that simple.
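To make the analogy concrete, here is a toy sketch of that structure in Python: a database plus subroutines operating on it. All facts and names below are invented by me purely for illustration; the book itself, of course, contains no code.

# A toy "analyst-as-interpreter": a database plus subroutines over it.
FACTS = {
    ("country_x", "army_size"): 120000,
    ("country_x", "gdp_growth"): -0.02,
}

def query(subject, attribute):
    # One subroutine: look a single fact up, or admit ignorance.
    return FACTS.get((subject, attribute), "no intelligence available")

def report(subject):
    # Another subroutine: compile everything known into a "report".
    lines = [f"{attr}: {value}"
             for (subj, attr), value in FACTS.items() if subj == subject]
    return "\n".join(lines) or "nothing on file"

print(query("country_x", "army_size"))  # 120000
print(report("country_x"))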
Naturally, as every programmer knows, the machine primitives available to the programmer are far from ideal. Most programmers are used to integer overflows when adding numbers. Far fewer people are prepared for cognitive biases in humans when writing instructions, manuals, and guidelines for people.
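For the sake of the analogy, this is what that familiar machine-level flaw looks like, simulated in a few lines of Python (my own illustration, not from the book):

def add_int32(a, b):
    # Simulate 32-bit two's-complement addition: the "machine primitive"
    # silently wraps around instead of behaving like an ideal integer.
    s = (a + b) & 0xFFFFFFFF
    return s - 0x100000000 if s >= 0x80000000 else s

print(add_int32(2147483647, 1))  # -2147483648, not 2147483648

Cognitive biases are the analogous wraparound in the analyst's head: the operation mostly works, and it fails silently exactly when the inputs become extreme.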
Indeed, the book deals quite a lot with these two topics: how to ensure proper data management in an analyst's brain, and how to "program" analysts so that their reports are written taking into account the limitations of the human brain's thinking subroutines.
I will not write a lot about the guidelines themselves. After all, the book is all about them, and I have prepared a list of key points for myself in a separate document. ( https://gitlab.com/Lockywolf/linuxbugs-lwf/-/blob/master/notes/2020-09-03_Richards-J-Heuer-Jr_Psychology-of-Intelligence-Analysis/2020-09-03_richards-j-heuer-jr_psychology-of-intelligence-analysis_notes.org)
But I need to give the author points for writing the book in a way that is pleasant to read. Chapters are well structured, and every section has a summary prepended to the main text. (I revised the book's content after reading by doing a second pass over the summaries only.) Several exercises are given that illustrate some of the book's concepts on the reader himself. Nice!
In general, however, the three most general pieces of advice the book gives would be:
One of the best books on the skill of thinking. I would definitely recommend it as a high-school textbook for kids, as a gentle introduction into how messy the thinking process actually is. I will probably also reread it later, for a recap and to spot the unnoticed jewels.
The only drawback is that this book is already 20 years old; perhaps there is something more advanced out there already? Suggestions welcome.
I also have:
People have sex in various ways. The most obvious way is to do it the way we all know. However, at times this is not the best option.
This memo is about things we can do instead.
Men and women sometimes have sex, and sometimes cannot have sex. The reasons may differ, from a lack of security to medical inability. The need for sex still remains even when the most obvious solution is not a viable option. Unless satisfied, the need for sex negatively impacts work performance, social interactions, and mental health, among other things. This memo explores various surrogates.
What is having sex? Answering this question substantially would require a lot of research in psychology, sociology, and physiology, which I am unable to conduct. I think that sex is (a) a form of communication that (b) requires two entities of opposite sexes. The working definition adopted in this document will be: "complementary cooperation".
Types of cooperation. I divide cooperation into complementary and substitutive. Substitutive cooperation is the kind you find when a job needs to be done by several agents, maybe playing different roles, but these roles are generic enough that the cooperating agents can switch them. Complementary cooperation requires that the roles be so different that they cannot easily be switched: one actor can complement the actions undertaken by another. This classification is not binary, but rather a continuous spectrum.
Examples:
Example 1: a game of tennis. Both players are doing exactly the same job and are completely replaceable. This is a very substitutive cooperation.
Example 2: driving a car with a paper map. One person is driving the car, the other is finding the route on the map. If both people can drive and both understand the language the map is written in, this is a substitutive cooperation. If one can drive and only the other can read the map, the cooperation becomes much more complementary.
Example 3: a father registering a child for school classes. This is not cooperation at all. Even though the classes cannot happen without both the signature and the student, this paired activity is not cooperative.
Example 4: a massage. An example of a semi-substitutive cooperation: one can play the role of the other, but not vice versa.
Sex surrogates. So, for sex surrogates we want to find cooperative activities that are as complementary as possible, but are still not classified as explicitly sexual. This document attempts to collect several such activities. Not all of them are equally good, but pull requests are welcome. Some of these surrogates are just dating ideas.
Clarity: some of these activities can be forced to be substitutive, and remain complementary only as long as both partners are happy to keep them that way. This is a drawback, but a minor one.
For example, one player can specialise in driving cars, and the other one in shooting, or whatever. The 2vsW paradigm lets people feel a sense of shared interest. The drawback is that the contribution is sometimes hugely unequal (e.g. when one partner can play well and the other cannot).
These are 2vsM games. The contribution is always equal, but the specialisation is often not that big. No living human enemies.
One of the best surrogates. Sex roles are almost never switched. There is a shared goal.
This is the aforementioned case of “one drives, the other one navigates”. Not so cool in the age of GPS.
We write a research paper together. You propose a theory, I measure the data. You clean the data, I write the data-analysis code.
An excellent choice. If you can play anything.
Mentioned above. Not a very good option actually.
If you can lift anyone.
These are the kinds of sports that can be done alone, but are just much better together. Stretching is one example, where your own muscles alone are just not convenient enough.
Don’t let uncooperative people edit your text; hence high cooperativeness. The complementarity is arguable.
The drawback is that this is a very hard thing, so it feels more like a job than like leisure. Still, it can help placate the sexual instinct if no other options present themselves.
Despite the low scores, one of the best feelings in the world, if you both know programming. One person writes the code; the other comments, suggests improvements, and spots mistakes. Works especially well if you have spaghetti code. Unfortunately, it can be hugely imbalanced. It also becomes less sexy the better programmers you become.
A better version of “driving together”. The stronger partner is probably better suited to the back seat.
Not an activity you can easily invite someone to do, but if you share a common fondness for some author, it may work.
This entry is for activities that are either not that good or too explicitly sexual, but are still worth mentioning.