This is partially rambling about things on my mind because of drink (wow, drunk for two posts in a row on the subforum... great start)
Considering it further, I'd imagine it would rely heavily on precomputed knowledge. Consider that the term "cube" represents a well-defined mathematical concept: a 3D object with only 90-degree angles and equal-length sides. Given that, you could abstract upwards and say that a pattern X belongs to a concept Y, which is further explained by Z. At some level you'd need to decide whether a particular level of abstraction is useful (e.g. "shape" vs. "cube"); that choice is partially arbitrary, beyond the terminology which defines each concept's logical boundaries. E.g. the set of "rectangles" contains "squares" as a subset, but not every "rectangle" is a "square", so there's a clean logical line between them; "table"*, however, is partially subjective, as there are no logical boundaries to its definition.
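The rectangle/square relationship above can be sketched as predicates, where the stricter concept inherits every constraint of the looser one and adds at least one more. This is just an illustration; the property format and names are made up:

```python
# A minimal sketch of concept subsumption: a concept is a predicate over
# properties, and "square" is defined by strictly more constraints than
# "rectangle", so every square is a rectangle but not the reverse.

def is_rectangle(shape):
    # four sides, all angles 90 degrees
    return shape["sides"] == 4 and all(a == 90 for a in shape["angles"])

def is_square(shape):
    # a rectangle whose sides are all equal in length
    return is_rectangle(shape) and len(set(shape["lengths"])) == 1

square = {"sides": 4, "angles": [90] * 4, "lengths": [2, 2, 2, 2]}
oblong = {"sides": 4, "angles": [90] * 4, "lengths": [2, 5, 2, 5]}

print(is_rectangle(square), is_square(square))  # True True
print(is_rectangle(oblong), is_square(oblong))  # True False
```

The point being: you can draw this line mechanically because "square" has a logical definition, which is exactly what "table" lacks.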
*You could theoretically define a table to its logical limits, say a surface parallel to its base (as part of the definition), but humans would still consider something a few degrees off to be a table; furthermore, even a 45-degree surface above base level could be considered a table if objects were designed to adhere to it in some manner. When you take such considerations into account, you essentially have to accept either that any surface raised above its base is a table, or that only a parallel one is. Point being: most things in language don't have a logical definition.
Really, I guess you'd have to lean on human language itself to define anything via computers. If a human understands it, then technically you can consider it defined in enough detail (beyond logical constraints).
This partially leads on to another topic I've been meaning to post:
In my mind it's inevitable that human-language interpretation becomes a middleware industry. Essentially, such middleware would map human interactions onto a defined set of goals. It gets complicated when you consider a question like "will a particular merger raise net profits within X amount of time?" To a human it's immediately obvious that when a huge conglomerate takes over a fledgling company, the chances of success dramatically increase, but quantifying that would essentially require a computer to quantify everything. However, much like how pruning the probability tree is the most important part of solving any particular game with game theory, you could heuristically work out what depth of the probability tree can be searched given a particular set of computational resources.
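That last idea can be sketched in a few lines: pick the deepest tree depth a node budget can afford, then run a depth-limited expectimax that falls back to a heuristic at the cap. Everything here, the node format, the toy "merger" tree, and the heuristic values, is made up for illustration:

```python
import math

def affordable_depth(budget_nodes, branching):
    # A full tree of depth d with `branching` children per node costs
    # roughly branching**d evaluations, so the deepest affordable depth
    # is about log(budget) / log(branching).
    if branching <= 1:
        return budget_nodes  # degenerate case: a chain, one node per level
    return int(math.log(budget_nodes) / math.log(branching))

def expected_value(node, depth, children_of, value_of):
    # Depth-limited expectimax over a probability tree: at the depth cap
    # (or at a leaf) fall back to a heuristic value, otherwise average
    # the children weighted by their probabilities.
    kids = children_of(node)
    if depth == 0 or not kids:
        return value_of(node)
    return sum(p * expected_value(child, depth - 1, children_of, value_of)
               for p, child in kids)

# Toy "merger" tree: 70% chance the takeover works out.
children = {
    "merge": [(0.7, "success"), (0.3, "failure")],
    "success": [],
    "failure": [],
}
heuristic = {"merge": 0.0, "success": 10.0, "failure": -4.0}

depth = affordable_depth(100, 2)  # how deep 100 node evaluations get us
ev = expected_value("merge", depth, lambda n: children[n], lambda n: heuristic[n])
print(depth, ev)
```

The interesting (and hard) part is, of course, the heuristic itself, which is exactly the "quantify everything" problem above.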
Given that human interaction is relatively fixed in scope, it's inevitable that computers will come to excel at this kind of computation. Human interaction isn't logical (taken to its logical extreme you'd have to consider quantum theories, and essentially the question of whether the future can be perfectly predicted given as much knowledge as possible, but I digress), yet you can define it within "useful" logical constraints.
Anyyyywayyy... I wonder how long it will be before companies start offering middleware to map arbitrary human interactions onto defined goals.
tbh, I've thought about this way too long to condense it into a single post, let alone while partially drunk (now sobering due to the time taken to post). I'm gonna post this anyway, even though it would usually be relegated to unposted things. I must hold some kind of record for time spent writing posts which just get deleted at the end *existential sigh*