Unicode is thoroughly underrated.
UTF-8, doubly so. One of the amazing/clever things they did was to build off of ASCII as a subset, taking advantage of the unused high bit to stay backwards compatible, which is a lesson we should all learn when evolving systems with users (your chances of success are much better if you extend than if you rewrite).
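To make the trick concrete, here's a quick Python sketch (my own illustration, not anything from the Unicode folks) of how the high bit separates ASCII from multi-byte sequences:

    # Every ASCII byte (0x00-0x7F) has the high bit clear, so pure ASCII
    # is already valid UTF-8. Bytes with the high bit set belong to a
    # multi-byte sequence, and the leading byte's pattern gives its length.
    def utf8_seq_len(first_byte: int) -> int:
        if first_byte < 0b10000000:     # 0xxxxxxx: plain ASCII, 1 byte
            return 1
        if first_byte >> 5 == 0b110:    # 110xxxxx: 2-byte sequence
            return 2
        if first_byte >> 4 == 0b1110:   # 1110xxxx: 3-byte sequence
            return 3
        if first_byte >> 3 == 0b11110:  # 11110xxx: 4-byte sequence
            return 4
        raise ValueError("continuation byte (10xxxxxx) or invalid leader")

    assert "A".encode("utf-8") == b"A"            # ASCII round-trips unchanged
    assert utf8_seq_len("é".encode("utf-8")[0]) == 2
    assert utf8_seq_len("€".encode("utf-8")[0]) == 3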
On the other hand, having dealt with UTF-7 (a very “special” email encoding), I’d say it takes a certain kind of nerd to really appreciate the nuances of encodings.
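If you want to see the weirdness firsthand, Python's stdlib still ships a utf-7 codec (the codec is real; the examples are my own):

    # UTF-7 keeps everything in 7-bit ASCII by escaping non-ASCII runs as
    # modified Base64 between '+' and '-'; even a literal '+' needs escaping.
    print("é".encode("utf-7"))       # b'+AOk-' (U+00E9 as Base64'd UTF-16)
    print("+".encode("utf-7"))       # b'+-'    (a literal plus sign)
    print(b"+AOk-".decode("utf-7"))  # 'é'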
I’ve recently come to appreciate the “refactor the code while you write it” and “keep possible future changes in mind” ideas more and more. I think it really increases the probability that the system can live on instead of becoming obsolete.
Yes, but once code becomes spaghetti enough that a “refactor while you write it” is too time-intensive and error-prone, it’s already too late.
Related, but what do you think (if anything) the last remaining reserved bit in the IPv4 header flags might end up being used for?
https://en.wikipedia.org/wiki/Evil_bit
https://en.wikipedia.org/wiki/Internet_Protocol_version_4#Header
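For anyone who hasn't stared at the header recently, those three bits are easy to pull out yourself. A quick Python sketch (field names are my own, just for illustration):

    # The flags are the top 3 bits of the 16-bit word at byte offset 6:
    # bit 0 is reserved (RFC 3514's joke "evil bit"), bit 1 is Don't
    # Fragment (DF), bit 2 is More Fragments (MF).
    def ipv4_flags(header: bytes) -> dict:
        frag_word = int.from_bytes(header[6:8], "big")
        return {
            "reserved_evil": bool(frag_word & 0x8000),  # must be 0 today
            "dont_fragment": bool(frag_word & 0x4000),
            "more_fragments": bool(frag_word & 0x2000),
            "fragment_offset": frag_word & 0x1FFF,      # units of 8 bytes
        }

    # A minimal 20-byte header with DF set (0x4000 in the fragment word):
    hdr = bytes.fromhex("45000054abcd40004001000ac0a80001c0a80002")
    print(ipv4_flags(hdr))  # dont_fragment=True, everything else clear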