Early on, I knew I was going to program computers when I grew up. For our sixth grade graduation, our class sang a song listing the careers we’d someday have, and my poor music teacher had to stuff “computer programmer” into the lyrics. I spent time reading books like Isaac Asimov’s I, Robot and a series about a group called the AI Gang. In these books, robots interacted with humans and had some manner of intelligence. In the more interesting of the Asimov stories, the robots had some understanding of their own existence, and of how important it was to be aware that they existed. I was certain that by the time I grew up, I’d be working on thinking computers, either building the first ones or dramatically expanding what a robot could do or understand.
Eventually I realized that the field of artificial intelligence is in a very rudimentary state, at least as measured against the idea of self-awareness. (Self-awareness, and what it means, could be a very long blog entry in and of itself... a neat topic to grapple with.) Working in AI would mean long hours of research with very little reward relative to that end goal. So I bagged the idea of AI work, and instead enjoyed the fruits of systems development and software construction.
My views on AI have shifted: I no longer believe that truly intelligent computers will ever exist. God blessed man with a gift, and I don’t believe it will ever be in man’s power to create a computer with that same capability (note that man was thrown out of Eden for eating from the tree of knowledge). But I do think that in pursuing the boundaries of what we can do, we better appreciate and wonder at the things we will never be able to do.
In that vein, two projects have caught my attention lately. One’s called A.L.I.C.E. It’s an open-source markup language (AIML) and bot engine that lets folks create a free natural-language artificial intelligence chat robot. In other words, a computer you can talk with that responds appropriately. (Note that I don’t say intelligently, as it has no true understanding, per se, of the conversation.) Wow! Theoretically, in addition to giving appropriate conversational responses, you could tie in system triggers parameterized with information drawn from the conversation. So you could tell the computer something, using conversational language, and have it react and cause other things to occur. Have it mine the conversations and their results, and now it has more information with which to inform future conversations. The computer wouldn’t be self-aware, but its future reactions could learn from previous ones.
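To make that concrete, here’s a minimal sketch in Python of the idea (this is not A.L.I.C.E. or AIML itself, and every name in it is made up): pattern-matched replies, a hypothetical system trigger parameterized by something said in the conversation, and a little memory that lets earlier exchanges inform later ones.

```python
import re

# Toy illustration only -- not how A.L.I.C.E./AIML actually works.
memory = {}  # facts mined from the conversation, e.g. {"name": "Mike"}

def remind_trigger(topic):
    # Hypothetical system action parameterized by conversation content.
    print(f"[system] scheduling a reminder about: {topic}")

rules = [
    (re.compile(r"my name is (\w+)", re.I),
     lambda m: memory.update(name=m.group(1)) or f"Nice to meet you, {m.group(1)}."),
    (re.compile(r"remind me to (.+)", re.I),
     lambda m: remind_trigger(m.group(1)) or "Okay, I'll take care of that."),
    (re.compile(r"who am i", re.I),
     lambda m: f"You told me your name is {memory.get('name', 'a mystery')}."),
]

def respond(utterance):
    # Return the first rule's response whose pattern matches the input.
    for pattern, action in rules:
        match = pattern.search(utterance)
        if match:
            return action(match)
    return "Tell me more."

if __name__ == "__main__":
    print(respond("My name is Mike"))                  # stored in memory
    print(respond("Remind me to back up the server"))  # fires the system trigger
    print(respond("Who am I?"))                        # answered from memory
```

The real engine is far richer, but the shape is the same: conversation in, matched pattern, response plus (optionally) a side effect, and anything mined along the way feeds the next exchange.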
The second project is run out of the National Library of Medicine, which runs all sorts of neat projects. The specific project is called WebMIRS. It’s basically a tool for accessing certain sets of medical survey data. Pretty basic data access application, but it has some exciting future goals. Essentially, the folks at NLM are interested in having the application recognize various medically interesting things, such as fused vertebrae or vertebrae with bone spurs, by evaluating the image data in X-rays. So, I could type in a query like, “return all data where the spine has some contusion in vertebra 4,” and the computer would translate that query into some evaluation of the image data. The human brain makes a qualitative judgement, comparing what it knows of what contusions look like on vertebrae with the picture it’s examining now. But how do we tell a computer to make that kind of recognition? We’d be teaching a computer to translate the bits and bytes that make up the image into some picture of what a particular vertebra looks like, and then telling it to compare that to what contused vertebrae generally look like: to have some understanding of the contents and context of a picture. Wow!
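The gap between that query and the raw pixels is the hard part. Here’s a hedged, toy sketch in Python, with no connection to how WebMIRS or NLM actually approach it and with entirely invented names, of the basic shape of the idea: boil “what a contusion generally looks like” down to a template, score each record’s image region against it, and let the query filter on that score.

```python
import numpy as np

def contusion_score(region, template):
    """Crude similarity between an image region and a 'template' of what a
    contusion tends to look like (normalized correlation of pixel values)."""
    r = (region - region.mean()) / (region.std() + 1e-8)
    t = (template - template.mean()) / (template.std() + 1e-8)
    return float((r * t).mean())

def query_contusion(xray_regions, template, threshold=0.5):
    """'Return all records where vertebra 4 shows a contusion':
    score each record's vertebra-4 region and keep the ones that match."""
    return [rec_id for rec_id, region in xray_regions.items()
            if contusion_score(region, template) > threshold]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    template = rng.random((16, 16))  # stand-in for a learned "contusion" appearance
    records = {
        "patient_001": template + 0.05 * rng.random((16, 16)),  # resembles the template
        "patient_002": rng.random((16, 16)),                    # doesn't
    }
    print(query_contusion(records, template))  # -> ['patient_001']
```

A real system would need far more than correlating pixels, including finding the vertebra in the first place and coping with how differently the same condition can look from patient to patient, which is exactly why the goal is so ambitious.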
Exciting stuff! And all too much for my tired brain to handle right now... My own system’s going to retreat to bed and run whatever screensaver/dream is currently queued up for me.