It seems that you (like a great number of other people) assume that an ordinary person without special skills cannot understand what code does once it has been compiled. Fortunately, that assumption is wrong. Learning to read a disassembly of machine code is easier than you might think, and reading higher-level bytecode such as the Android VM's smali is easier still.
So don't fear compilation. What is truly worrying is any attempt to restrict access to the code a computer runs. Even a highly skilled reverse engineer can do nothing if there is no way to access the code.
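To see how readable bytecode can be, here is a small sketch using Python's standard `dis` module (Python bytecode plays roughly the role smali plays for the Android VM; the function below is just an arbitrary example):

```python
import dis

def add_tax(price):
    # A trivial function whose compiled form we will inspect.
    return price * 1.08

# Disassemble the compiled bytecode; the output names each operation
# (LOAD_FAST, a binary-multiply op, RETURN_VALUE), which reads almost
# like the source once you know a handful of opcodes.
dis.dis(add_tax)

# The instruction stream is also available programmatically:
ops = [ins.opname for ins in dis.get_instructions(add_tax)]
print(ops)
```

Even without ever having seen these opcodes, loading a variable, multiplying, and returning are all plainly visible in the listing.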
Even though my intuition tells me I should support neurodiversity, my reasoning keeps failing to justify it without introducing "diversity is unconditionally good" as an axiom. I find it difficult to add that to the list of axioms I accept, because endorsing diversity seems hypocritical considering the history of humans.
Almost every human has the ability to learn and speak language, an ability unique to mankind. (What the media often report as animal language is not a language, in that it lacks recursion -- the ability to handle recursion is, Chomsky and his supporters believe, unique to humans.)
But shortly after language was born, in the very first stage of its evolution, there must have been a significant percentage of people who could not learn it. Where did they go? The answer: they went extinct, failing to reproduce. And that's why we can all learn and speak language -- our ancestors are the ones who could. By making it difficult for those who couldn't speak to reproduce, natural selection built a society in which almost all members can. Having autism in this era is analogous to being non-verbal in that early stage.
With that said, endorsing diversity seems to me a denial of evolution, a denial of how we got this far: by putting selection pressure on those who cannot adapt to society. And the sad reality is that you cannot stop it from happening. Autistic people will go extinct, even without gene-editing technology, just as non-verbal people did.
Why do autistic people have to go extinct? Can't we support them in this wealthy day and age? Couldn't great cognitive tools lie beneath autism and other supposed illnesses? In my experience, "normal" people are the most boring because they can't think in (what they would consider) contorted ways.
There's a whole spectrum of people. One end defines large parts of society: the plain average. At the other end are people whose minds are so distorted relative to the norm that it's impossible for them to function in our society. I would put the blame for that on society not being diverse or open enough.
It seems to me that evolution works because of diversity: an organism diversifies through mutations to the point where one type gets an advantage over the others and survives. Think of it this way: If you limit yourself to what you already know, you'll be turning in circles, constantly reaching the same conclusions for the same problems. Let some outside knowledge/factors mess things up and you'll be able to move on.
Once I tried to figure out how to parse complex C declarations just by reading the specification (http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1570.pdf), that is, without consulting layman's guides like this one. But I gave up: I looked at what seemed to be a BNF-like description of the C grammar, but I had no idea what it said about the parsing rules. So I ended up using this guide: http://ieng9.ucsd.edu/~cs30x/rt_lt.rule.html With it I managed to implement an imitation of cdecl.
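For the curious, the right-left rule from that guide can be sketched in a few dozen lines of Python. This is a simplified toy, not a real C parser: it handles `*`, `[]`, empty `()` and grouping parentheses, and assumes a one-word base type with no qualifiers or parameter lists.

```python
import re

def explain(decl):
    """English rendering of a (simplified) C declaration via the
    right-left rule: from the identifier, consume suffixes ([] and ())
    going right, then prefixes (*) going left, then step out of one
    level of grouping parentheses and repeat."""
    tokens = re.findall(r"\w+|[*()\[\]]", decl)
    base = tokens.pop(0)            # base type, e.g. "int"
    # The identifier is the first plain word in the declarator.
    i = next(k for k, t in enumerate(tokens) if t.isidentifier())
    phrase = [tokens[i], "is"]
    left, right = i - 1, i + 1
    while left >= 0 or right < len(tokens):
        # Go right first: arrays and functions bind tighter than *.
        while right < len(tokens) and tokens[right] in "[(":
            if tokens[right] == "[":
                j = tokens.index("]", right)
                size = " ".join(tokens[right + 1:j])
                phrase.append(f"array[{size}] of" if size else "array of")
                right = j + 1
            else:                   # "(" ... ")" -> function
                j = tokens.index(")", right)
                phrase.append("function returning")
                right = j + 1
        # Then left: pointers, until a grouping "(" stops us.
        while left >= 0 and tokens[left] == "*":
            phrase.append("pointer to")
            left -= 1
        # Step out of one level of grouping parentheses, if any.
        if left >= 0 and tokens[left] == "(":
            left -= 1
            if right < len(tokens) and tokens[right] == ")":
                right += 1
    phrase.append(base)
    return " ".join(phrase)

print(explain("char *(*(*x[3])())[5];"))
```

On the classic torture test it produces "x is array[3] of pointer to function returning pointer to array[5] of pointer to char", matching what cdecl says.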
It might be the right thing, by some standard I cannot comprehend, to torture -- to add more pain to -- someone who already feels so much pain that she attempts suicide. We might have to accept such a standard as the cost of diversity.
However, it seems obvious to me that no sound-thinking person can argue that it is justifiable to obstruct suicide attempts. One of the values on which capitalist society is based is the notion of private property, which entails that we have exclusive control over the things we own. If it were justifiable to obstruct suicide attempts, that would imply that we do not own our bodies. Well, there's a name for such people: slaves.
It's just funny that it is still called "natural selection" even when the environment doing the selecting is created by nothing but ourselves. It would be more accurate to call it "social selection," because for humans the environment is not nature but society.
With that in mind, the clever among you might wonder: what is the fundamental difference between "social selection" and eugenics, which many consider morally wrong? My answer: they are essentially the same thing. If we define eugenics as society deciding who gets to reproduce and who does not, that is exactly what is happening now.
I won't argue with the common-usage meanings that 'natural' and 'artificial' have acquired for things like plastic flowers or street lamps, but I see 'ourselves' -- humankind -- as part of nature, not supernatural or apart from it. I am not sure when this distinction arose, but using 'artificial' or 'societal' here seems self-hating, and I don't think the term 'natural selection' is archaic or exclusive in this sense.
For the same reason, I find cell towers made to look like trees an eyesore, and much prefer the 'natural' shape a truss tower takes, with its wide supportive base, diagonals, and ever-decreasing thickness toward the spire. I personally find these more aesthetically appealing.
Light from 'artificial' light bulbs is made of photons regardless of color temperature or the time of day it is turned on. I don't like plastic flowers, although plastic is also found in nature -- just not as structured, homogenized, or in the quantities we produce it.
I agree with you on the 'social selection' and eugenics bit. My mom bought me a hardcover 'Eugenics' textbook from an old bookshop, originally published in England in the '20s. Scary content, given what occurred 20 years later, but it was humorous to me when I read it in 1976, especially the chapter on 'The Honeymoon' and how to have the proper one, with attendant drawings! I was twelve at the time.
There is a fundamental difference. Natural selection works on an individual basis: everyone tries to maximize the amount of his or her DNA in the world. In eugenics (and to a lesser degree in social selection), someone else -- or the group -- decides that his or her DNA should not be inherited. Basically capitalism vs. communism.
In sexual selection there is also someone else (i.e. the group of females in a peacock population) who decides whose DNA will not be inherited. And sexual selection is considered a part of natural selection. So your argument does not hold.
Wikipedia: "Sexual selection is a mode of natural selection where members of one biological sex choose mates of the other sex to mate with."
I could even make the same argument for natural selection itself: when a predator kills an animal, it decides that the animal's DNA will not spread further.
Extracting machine-understandable meaning from web pages is closely analogous to extracting text from images.
Fortunately, to get machine-readable text out of a web page we usually don't need to process it with fancy yet barely accurate algorithms. Why? Because we agreed on character codes for representing letters, and most text is encoded with one of them, so there is no need to OCR pictures of handwritten letters in order to process text programmatically.
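That agreement is exactly why text extraction is mechanical rather than probabilistic. A minimal sketch (the header string and sample text are made up for illustration):

```python
# Because web text is encoded with an agreed character code, extracting
# it is a deterministic decode, not an OCR problem.
raw = "コンピュータ and ASCII side by side".encode("utf-8")  # bytes on the wire

# A (simplified) Content-Type header tells the client which code was used:
content_type = "text/html; charset=utf-8"
charset = content_type.split("charset=")[1]

text = raw.decode(charset)   # lossless, exact recovery of the original text
print(text)
```

Contrast this with page *structure*, where no such shared code exists, and every consumer is back to guessing.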
These kinds of programs wouldn't be needed if only the same thing had happened for page structures -- if HTTP included page semantics.
The issue with creating tech for such semantics is whether authors will put in the effort to provide metadata. For example, rel=next/prev has been around forever, but most web pages don't use it because browsers and other clients don't expose it. Other data mentioned in the examples, like title and Open Graph tags, is provided for search engines, Facebook previews, and such.
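When pages do provide the metadata, consuming it is trivial. A sketch with the stdlib `html.parser` (the sample page and URLs are invented):

```python
from html.parser import HTMLParser

class RelLinkFinder(HTMLParser):
    """Collect rel/href pairs from <link> and <a> tags on a page."""
    def __init__(self):
        super().__init__()
        self.rels = {}

    def handle_starttag(self, tag, attrs):
        if tag in ("link", "a"):
            d = dict(attrs)
            if "rel" in d and "href" in d:
                self.rels[d["rel"]] = d["href"]

# A hypothetical paginated page that does expose the metadata:
page = ('<html><head><link rel="next" href="/articles?page=3">'
        '<link rel="prev" href="/articles?page=1"></head></html>')
finder = RelLinkFinder()
finder.feed(page)
print(finder.rels)
```

A client with this ten-line parser could offer "next page" navigation for free; the bottleneck is authors emitting the tags, not the consuming code.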
But is Fathom supposed to be for annotating your own web site, or analyzing others'? If the former, then I'm truly bewildered, but it's not clear which it is.
As for rel=next/prev, I don't use them because Google makes it clear that they cause the whole sequence to be treated as one paginated document in the index, contrary to their original semantics. I'd love for someone to correct me on this.
I suspect it's against the website's interests. If you provide semantic markup, you make it easier to crawl your website, extract the actual content, and leave the ads behind.
Well, yes -- only then everyone realized that it takes another good 2-3x of the work, over and above writing the text, to put it into a "machine-understandable" form with a whole bunch of metadata, and that it requires the people writing the text to be familiar with all of that and to have a good idea of what is "machine-understandable" and what is not, for... zero gain. There is just no application. Google works perfectly fine. Give it up already.
And just to take this off on another tangent: Word won the office space. No normal person writes LaTeX for a party invitation. And hell, even the people writing LaTeX don't want to bother with the "machine-understandable" thing, so they went ahead and made it a proper Turing-complete programming language.
I think the solution is smarter document editors: autocorrect/suggest with context awareness, database integrations, machine learning / basic AI, and so on.
Even simple things would help, like asking the writer to clarify which piece of text a written date applies to, or to mark which parts of a long, comma-heavy sentence belong together (which clauses are interjections, for example?).
I meant HTTP, but I have to admit that was not clear. What I had in mind when I wrote it was that something needs to force people to provide the semantics of web pages. HTTP is too liberal: it allows any kind of document to be transferred, including documents without semantic annotations. People would provide page semantics if HTTP required documents to be annotated with them, just as it requires a Content-Type to be given.
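To make the idea concrete, here's a toy sketch of a gateway that refuses documents lacking a semantics annotation, the same way responses already declare Content-Type. The "X-Page-Semantics" header is invented purely for illustration; nothing like it exists in real HTTP.

```python
# Headers a conforming response would have to declare in this
# hypothetical, stricter HTTP (X-Page-Semantics is made up).
REQUIRED = {"content-type", "x-page-semantics"}

def accept(headers):
    """Accept a response only if it declares both its character
    encoding and its page semantics."""
    return REQUIRED <= {k.lower() for k in headers}

ok = accept({"Content-Type": "text/html; charset=utf-8",
             "X-Page-Semantics": "article; next=/page/3"})
rejected = accept({"Content-Type": "text/html"})
print(ok, rejected)   # True False
```

The enforcement point is the interesting part: a validator like this at the protocol layer would make unannotated pages undeliverable rather than merely unfashionable.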
Ah, I see, that makes sense. The protocol level does seem like a good way to enforce it. Though I do have to wonder if, had that enforcement been put in place, people would have moved away from HTTP and toward a different, looser protocol on top of TCP. Or maybe that wouldn't have been practical. It's interesting to think how early, seemingly low level decisions about protocol design can have a profound effect on how things develop down the road.
You can read Wikipedia articles without letting Wikipedia know which article you're reading by downloading a database dump, available at dumps.wikimedia.org.
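Reading the dump offline is straightforward with a streaming XML parser. A sketch, with a tiny inline stand-in for the dump (real dumps use the same `<page><title>/<revision><text>` layout but add an XML namespace and are far too large to load at once, hence `iterparse`):

```python
import io
import xml.etree.ElementTree as ET

# Miniature stand-in for a pages-articles dump file.
sample = b"""<mediawiki>
  <page><title>Alan Turing</title>
    <revision><text>Mathematician and computer scientist.</text></revision>
  </page>
  <page><title>Lambda calculus</title>
    <revision><text>A formal system.</text></revision>
  </page>
</mediawiki>"""

def articles(stream):
    """Stream (title, text) pairs without holding the whole dump in memory."""
    for _, elem in ET.iterparse(stream, events=("end",)):
        if elem.tag == "page":
            yield elem.findtext("title"), elem.findtext("revision/text")
            elem.clear()            # free the subtree as we go

for title, text in articles(io.BytesIO(sample)):
    print(title, "->", text)
```

For a real dump you would pass a `bz2.open(...)` handle instead of the `BytesIO`, and strip the namespace from `elem.tag` before comparing.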
Google developed SPDY, a more efficient binary framing of HTTP messages. Maybe they will do the same for HTML. It would be far more efficient if one could design a binary representation of HTML that can only express well-formed documents.
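The "can only express well-formed HTML" part is achievable by serializing trees instead of tag soup. A toy sketch (the format, tag table, and opcodes are all invented): each element is a tag id plus a child count, so an unbalanced document simply has no encoding.

```python
import struct

TAGS = ["html", "head", "body", "p", "em"]   # illustrative subset
TEXT = 0xFF                                  # marker for text nodes

def encode(node):
    """Serialize a tree: ("tag", [children...]) or a text string."""
    if isinstance(node, str):
        data = node.encode("utf-8")
        return struct.pack(">BH", TEXT, len(data)) + data
    tag, children = node
    out = struct.pack(">BH", TAGS.index(tag), len(children))
    return out + b"".join(encode(c) for c in children)

def decode(buf, pos=0):
    """Inverse of encode; every decode yields a balanced tree by construction."""
    kind, n = struct.unpack_from(">BH", buf, pos)
    pos += 3
    if kind == TEXT:
        return buf[pos:pos + n].decode("utf-8"), pos + n
    children = []
    for _ in range(n):
        child, pos = decode(buf, pos)
        children.append(child)
    return (TAGS[kind], children), pos

doc = ("html", [("body", [("p", ["hello ", ("em", ["world"])])])])
blob = encode(doc)
roundtrip, _ = decode(blob)
```

Because open and close tags don't exist as separate tokens, the whole class of mismatched-tag errors is unrepresentable, and the encoding is also smaller than the textual HTML.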
It'd be great if it were implemented as a peer-to-peer protocol, as that would make taking the service down much more difficult.
EDIT:
Although making the service completely peer-to-peer might be impossible, it is possible to distribute the articles themselves peer-to-peer. I'm considering an architecture consisting of a peer-to-peer network for sharing articles, plus servers that download articles and put them on the network. Even if all the servers were taken down, any article that had already been shared would remain accessible, provided the peer-to-peer network stayed alive.
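The core mechanism that makes this survive takedowns is content addressing: if the key *is* the hash of the article, any peer holding a copy can serve it, and the copy can be verified without trusting the peer. A minimal sketch (the `Peer` class and sample article are invented; real systems like IPFS add routing and chunking on top of this idea):

```python
import hashlib

class Peer:
    """Toy content-addressed store shared by servers and peers alike."""
    def __init__(self):
        self.store = {}

    def put(self, data):
        key = hashlib.sha256(data).hexdigest()
        self.store[key] = data
        return key

    def get(self, key):
        data = self.store.get(key)
        # Verify on fetch: a malicious peer can't substitute other content.
        if data is not None:
            assert hashlib.sha256(data).hexdigest() == key
        return data

server, peer_a, peer_b = Peer(), Peer(), Peer()
key = server.put(b"Article: an open-access paper")
peer_a.put(server.get(key))          # the article spreads to the swarm
peer_b.put(peer_a.get(key))
server.store.clear()                 # the server is "taken down"
print(peer_b.get(key))               # still retrievable from the network
```

The servers in the proposed architecture only seed content; once an article has propagated, they are no longer a single point of failure.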
Sci-Hub and other open access advocacy things are practically the ideal use case for IPFS. One and done. Whoever implements it, you're welcome for the idea.
Especially with Internet Archive involvement, IPFS should be quite compelling these days, even in its infancy.
I suppose not. I presume the way in would be vulnerabilities in code that handles untrusted data streams, such as USB, Wi-Fi, or Bluetooth. Although it may be taking longer each time, every major iOS release has eventually been jailbroken, which suggests that an organization with a larger budget than the jailbreak hackers would likely be able to discover 0-day vulnerabilities in iOS.