The whole computing world, started by IBM, revolves around the binary system: simply put, true or false. With exponentially increased computing capability, software can make millions of true-or-false decisions. From what I understood in my student years, there were several methodologies for machine learning. All of them involved "coaching" and "training" machines, which meant recording as many responses as possible. The next step was to decide which response to use through very complicated computing algorithms. In plain English: probabilities. Software doesn't "think"; the code behind it always aims to pick one close-enough response based on the information you feed it. The larger the response database it can mine, the more "intelligent" it appears. Software can also be designed to correct its own errors and add the corrections to its response database, so that it appears to be "learning".
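To make that concrete, here is a minimal sketch of my own (not any real system's code; the train/respond functions and the sample data are all made up) of the "record responses, then pick the most probable one" idea. Real systems use far more sophisticated statistics, but the principle is the same:

from collections import Counter, defaultdict

# Hypothetical "training": record every (situation, response) pair observed.
memory = defaultdict(Counter)

def train(situation: str, response: str) -> None:
    """Record one observed response for a situation."""
    memory[situation][response] += 1

def respond(situation: str) -> str:
    """Replay whichever recorded answer was seen most often.
    The machine never 'thinks'; it plays the probabilities."""
    if situation in memory:
        response, _count = memory[situation].most_common(1)[0]
        return response
    return "no idea"  # nothing recorded: the illusion of intelligence breaks

# The more we record, the more "intelligent" it appears.
train("greeting", "hello")
train("greeting", "hello")
train("greeting", "hi")
print(respond("greeting"))   # -> "hello" (the most frequent recorded response)
print(respond("farewell"))   # -> "no idea"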
Remember the chess robot that repeatedly beat world champions? It has millions of moves recorded in its memory. Foolproof? No. Once a player moves illogically or "unexpectedly", it gets short-circuited.
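Again, a toy illustration of my own (the "opening book" and its moves are invented for this example): a lookup table of recorded moves works beautifully until the opponent plays something that was never recorded.

# Hypothetical "opening book": positions mapped to the recorded best reply.
book = {
    "e4": "e5",
    "d4": "d5",
    "c4": "e5",
}

def best_reply(opponent_move: str) -> str:
    try:
        return book[opponent_move]
    except KeyError:
        # An illogical or "unexpected" move: nothing recorded,
        # so the machine is effectively short-circuited.
        raise RuntimeError(f"no recorded response to {opponent_move!r}")

print(best_reply("e4"))   # -> "e5"
print(best_reply("a3"))   # -> RuntimeError: the database has no answer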
Therefore, software applications don't really "think". They don't understand human feelings and can't creatively write great poems or plays. But they can marvellously perform repetitive functions that involve huge amounts of computation. I guess I have bored everybody to tears.