Texts


My own prejudice is in favour of there being a simple algorithm for intelligence. And the main reason I like the idea, above and beyond the (inconclusive) arguments above, is that it’s an optimistic idea. When it comes to research, an unjustified optimism is often more productive than a seemingly better justified pessimism, for an optimist has the courage to set out and try new things. That’s the path to discovery, even if what is discovered is perhaps not what was originally hoped. A pessimist may be more “correct” in some narrow sense, but will discover less than the optimist. This point of view is in stark contrast to the way we usually judge ideas: by attempting to figure out whether they are right or wrong. That’s a sensible strategy for dealing with the routine minutiae of day-to-day research. But it can be the wrong way of judging a big, bold idea, the sort of idea that defines an entire research program. Sometimes, we have only weak evidence about whether such an idea is correct or not. We can meekly refuse to follow the idea, instead spending all our time squinting at the available evidence, trying to discern what’s true. Or we can accept that no-one yet knows, and instead work hard on developing the big, bold idea, in the understanding that while we have no guarantee of success, it is only thus that our understanding advances.

(Michael Nielsen - Neural networks and deep learning, 2015)


There are and can only be two ways of investigating and discovering truth. The one rushes up from the sense and particulars to axioms of the highest generality and, from these principles and their indubitable truth, goes on to infer and discover middle axioms; and this is the way in current use. The other way draws axioms from the sense and particulars by climbing steadily and by degrees so that it reaches the ones of highest generality last of all; and this is the true but still untrodden way.

(Francis Bacon, Novum Organum, 1620)


These Thoughts, my dear Friend, are many of them crude and hasty, and if I were merely ambitious of acquiring some Reputation in Philosophy, I ought to keep them by me, ’till corrected and improved by Time and farther Experience. But since even short Hints, and imperfect Experiments in any new Branch of Science, being communicated, have oftentimes a good Effect, in exciting the attention of the Ingenious to the Subject, and so becoming the Occasion of more exact disquisitions (as I before observed) and more compleat Discoveries, you are at Liberty to communicate this Paper to whom you please; it being of more Importance that Knowledge should increase, than that your Friend should be thought an accurate Philosopher.

(Letter from Benjamin Franklin to Peter Collinson, 1753)


I think I probably have two high level “complaints” about the program this year. First, I feel like we’re seeing more and more “I downloaded blah blah blah data and trained a model using entirely standard features to predict something and it kind of worked” papers. I apologize if I’ve just described your paper, but these papers really rub me the wrong way. I feel like I just don’t learn anything from them: we already know that machine learning works surprisingly well and I don’t really need more evidence of that. Now, if my sentence described your paper, but your paper additionally had a really interesting analysis that helps us understand something about language, then you rock! Second, I saw a lot of presentations where speakers were somewhat embarrassingly unaware of very prominent, very relevant prior work. (In none of these cases was the prior work my own: it was work that’s much more famous.) Sometimes the papers were cited (and it was more of a “why didn’t you compare against that” issue) but very frequently they were not. Obviously not everyone knows about all papers, but I recognized this even for papers that aren’t even close to my area.

(Hal Daumé III, NAACL 2010 Retrospective, 2010)


A student proves what I feel is quite a nice result, but the student laments that the proof was easy: they had simply put together various tools from earlier papers. Guess what? That’s called research. Every mathematical result is simply a combination of logical statements put together in the right way. But unless P=NP we cannot automate this process. Our job is to figure out how to combine results and techniques we already know to prove things we didn’t know before.

(Lance Fortnow, Putting the Pieces Together, 2006)


Guillermo Moncecchi
Adjunct Professor