Larry Page & Sergey Brin contact info!
I bet they still use their good ol' email from the Stanford days:
http://web.archive.org/web/199804181...d.edu/~sergey/ http://web.archive.org/web/199902101...ord.edu/~page/ Browse the profiles of current students and please tell me which ones are going to be worth more than $10 Billion when the companies they are going to create go public... |
Actually those pages have very interesting links.
Check this one out, please: a Larry Page post from 1996 http://www.google.com/search?q=cache...ford.edu&hl=es I love this part: "I am using the data to do clustering and some economic models of the web. I'll send mail to this list when I have my query engine up." |
:glugglug
|
Fuck, no one finds this interesting? :(
I guess you rednecks only care about nude pic posts... :1orglaugh |
nice googling..
|
Interesting. Google was almost gonna be called Back Rub it seems! LOL
|
Quote:
This isn't the history channel. :winkwink: |
There's a particular link on this page: http://web.archive.org/web/199804181...d.edu/~sergey/
which I wish still worked: "PageRank, an Eigenvector based Ranking Approach for Hypertext" by Lawrence Page and Sergey Brin. This is work in progress which we intend to submit to SIGIR '98. Of course, it's no wonder it's disabled, but really, there are some links on those pages which explain some of the early algorithms used by Google... so, draw your own conclusions |
Jesus, this is simply priceless.
http://web.archive.org/web/199805020....stanford.edu/ and their first server running Google. http://web.archive.org/web/199805020...ehardware.html :glugglug Cheers. If only they knew how humongous their project would become. Fucking amazing. |
OMG Fucking funny. The original name for Google was going to be BackRub.
And this: Google is research in progress and there are only a few of us so expect some downtimes and malfunctions. This system used to be called Backrub. |
Good find
|
Quote:
I've been reading all those links since this afternoon. BTW, their email addresses [email protected] & [email protected] are active and DO REALLY work. I wish I had something interesting to propose to them besides asking for a big AdWords discount... |
This is fucking PRICELESS!
3. PageRank. The IB(P) metric treats all links equally. Thus, a link from the Yahoo home page counts the same as a link from some individual's home page. However, since the Yahoo home page is more important (it has a much higher IB count), it would make sense to value that link more highly. The PageRank backlink metric, IR(P), recursively defines the importance of a page to be the weighted sum of the backlinks to it. Such a metric has been found to be very useful in ranking results of user queries [Page 1998.2]. We use IR'(P) for the estimated value of IR(P) when we have only a subset of pages available.

More formally, if a page has no outgoing link, we assume that it has outgoing links to every single Web page. Next, consider a page P that is pointed at by pages T1, ..., Tn. Let ci be the number of links going out of page Ti. Also, let d be a damping factor (whose intuition is given below). Then, the weighted backlink count of page P is given by

IR(P) = (1 - d) + d * ( IR(T1)/c1 + ... + IR(Tn)/cn )

This leads to one equation per Web page, with an equal number of unknowns. The equations can be solved for the IR values. They can be solved iteratively, starting with all IR values equal to 1. At each step, the new IR(P) value is computed from the old IR(Ti) values (using the equation above), until the values converge. This calculation corresponds to computing the principal eigenvector of the link matrices. PageRank is described in much greater detail in [Page 1998.2].

One intuitive model for PageRank is that we can think of a user "surfing" the Web, starting from any page, and randomly selecting from that page a link to follow. When the user reaches a page with no outlinks, he jumps to a random page. Also, when the user is on a page, there is some probability, d, that the next visited page will be completely random. This damping factor d makes sense because users will only continue clicking on one task for a finite amount of time before they go on to something unrelated. The IR(P) values we computed above give us the probability that our random surfer is at P at any given time.

from: http://web.archive.org/web/200008181...crawler-paper/ |
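For anyone curious, the iterative calculation that excerpt describes fits in a few lines of Python. The link graph below is a made-up toy example, and d = 0.85 is just a commonly used damping value (the quoted text only calls it "d"):

```python
# Minimal sketch of the iterative IR (PageRank) calculation quoted above.
# Toy link graph and d=0.85 are illustrative assumptions, not from the paper.

def pagerank(links, d=0.85, iters=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = set(links) | {p for outs in links.values() for p in outs}
    ir = {p: 1.0 for p in pages}  # start with all IR values equal to 1
    for _ in range(iters):
        new = {}
        for p in pages:
            total = 0.0
            for t in pages:
                # A page with no outgoing links is treated as linking to
                # every page, as the quoted passage says.
                outs = links.get(t) or list(pages)
                if p in outs:
                    total += ir[t] / len(outs)
            # IR(P) = (1 - d) + d * ( IR(T1)/c1 + ... + IR(Tn)/cn )
            new[p] = (1 - d) + d * total
        ir = new
    return ir

# "yahoo" plays the hub role from the excerpt: it has the most backlinks,
# so it ends up with the highest IR value.
ranks = pagerank({"yahoo": ["a", "b"], "a": ["yahoo"], "b": ["yahoo", "a"]})
```

Nothing fancy, but running it shows exactly what the paper claims: the heavily linked page comes out on top, and the values converge after a few dozen iterations.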
Quote:
LOL, everyday we should all thank God that he put a few brilliant geeks on this planet. :1orglaugh :1orglaugh :1orglaugh |
^^^you're welcome^^^
|
Now, since I know little about advanced math, I'd like to know whether that info can be used for something useful or if it's worthless...
|
that is really amazing.. great post man :thumbsup
|
Looks like they should have had a GFY Logo contest.
http://web.archive.org/web/199805020...edu/google.gif :1orglaugh |
Powered by vBulletin® Version 3.8.8
Copyright ©2000 - 2025, vBulletin Solutions, Inc.
©2000- AI Media Network Inc