As students of computer science, our work tends to strive for utilitarian value: we want our code to have a clear and useful purpose. In doing this, however, we often obscure or overlook what is going on behind the scenes in a programming language and focus solely on how we can use it to a productive end. I think we thereby lose sight of some of the simple wonders of computer science, among them the application of binary code.
In this week's lab, we learned about the ways in which binary code can represent numerical values other than 1 and 0. For instance, "10" represents 2, "11" represents 3, and "100" represents 4. In learning this, I was amazed, again, by the fact that all information, however complex, can be represented with just two characters: "1" and "0". We can teach computers to complete myriad tasks with these two simple values. But how, then, did we first teach computers to handle binary code? If there was no linguistic foundation of the kind that binary code provides for all other programming languages, how did we teach computers to interpret even the simplest code? How did we make something, namely binary code, out of nothing?
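To convince myself of the arithmetic from lab, I tried writing out the positional logic as a small Python sketch (the function names are my own, not from the lab): each digit in a binary string is a power of two, which is exactly why "100" comes out to 4.

```python
# A minimal sketch of positional binary notation:
# "100" means 1*4 + 0*2 + 0*1 = 4.

def binary_to_decimal(bits: str) -> int:
    value = 0
    for bit in bits:
        # Shift the running value left by one place, then add the new digit.
        value = value * 2 + int(bit)
    return value

def decimal_to_binary(n: int) -> str:
    if n == 0:
        return "0"
    digits = []
    while n > 0:
        digits.append(str(n % 2))  # remainder is the lowest-order bit
        n //= 2
    return "".join(reversed(digits))

print(binary_to_decimal("10"), binary_to_decimal("11"), binary_to_decimal("100"))  # 2 3 4
print(decimal_to_binary(4))  # "100"
```

Seeing the conversion spelled out this way made the jump from "10" to 2 feel much less magical, even if it leaves my bigger question about how computers came to interpret binary in the first place untouched.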
I'd also like to take a moment to recognize Denise Jiang's post on recursion. She provides a great analogy for a recursive function without a base case: two parallel mirrors.
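Just to see her mirror image in code, here is a minimal Python sketch of my own (the names mirrors and countdown are mine, not hers): a recursive call with no base case reflects into itself forever, like two facing mirrors, until Python cuts it off, while a base case acts as the wall that ends the reflections.

```python
def mirrors():
    # No base case: every call makes another call, like two parallel mirrors.
    # Python eventually raises RecursionError because the call stack is finite.
    return mirrors()

def countdown(n: int) -> None:
    # The base case is what stops the "reflections."
    if n == 0:
        print("done")
        return
    print(n)
    countdown(n - 1)

countdown(3)  # prints 3, 2, 1, done

try:
    mirrors()
except RecursionError:
    print("infinite reflection: exceeded the recursion limit")
```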