There are a lot of people trying to use pagerank for academic journals, but so far it hasn't worked well for various reasons.
Part of the problem is that the metaphor breaks down: a paper is like an individual webpage, but a journal is like a company -- it has a much longer timeline, and its impact varies over time. Also, unlike web links, citations don't go away; they just accumulate over time. Since the point of these citation metrics is to rate the journals (and maybe the scientists), pagerank has some difficulties in this domain. It works better for ranking individual papers than for ranking scientists or their journals.
This shouldn't be too surprising: TechCrunch (for example) probably has good pagerank on many of its pages, but that doesn't tell us anything about Michael Arrington's reputation.
But we're not talking about ranking journals. We're talking about ranking authors. JIF is a reasonable metric for journals; the problem is that it's used to rate authors: what's the JIF of the journals you publish in?
The metric presented here is much better for rating authors because it gives more of an author's peers an opportunity to vouch for him by citing his work, as opposed to only a small editorial board and review committee who decide if he gets into TopJournalX.
Adding a pagerank-style coefficient (increasing the weight of citations that come from well-cited papers) would make this metric even better for precisely the reason you state: papers exist in perpetuity. If I write a paper now but it is ignored for 50 years, then someone builds upon that to break ground in an entirely new field, then I deserve some indirect credit for that. The journal I published in does not.
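To make the idea concrete, here's a minimal sketch (with a made-up toy citation graph, not real data) of what such a pagerank-style coefficient could look like: each paper's score is redistributed to the papers it cites, so a citation from a well-cited paper carries more weight than one from an obscure paper.

```python
def citation_rank(citations, damping=0.85, iters=50):
    """PageRank-style scores over a citation graph.

    citations: dict mapping each paper to the list of papers it cites.
    A citation from a highly-scored paper contributes more weight.
    """
    papers = set(citations) | {p for cited in citations.values() for p in cited}
    n = len(papers)
    rank = {p: 1.0 / n for p in papers}
    for _ in range(iters):
        # every paper gets a small baseline score
        new = {p: (1 - damping) / n for p in papers}
        for citing, cited_list in citations.items():
            if not cited_list:
                continue
            # split the citing paper's weight among the papers it cites
            share = damping * rank[citing] / len(cited_list)
            for cited in cited_list:
                new[cited] += share
        # papers with no outgoing citations redistribute their weight uniformly
        dangling = damping * sum(rank[p] for p in papers if not citations.get(p))
        for p in papers:
            new[p] += dangling / n
        rank = new
    return rank

# Toy example: papers A and B both cite C, which cites nothing.
graph = {"A": ["C"], "B": ["C"], "C": []}
ranks = citation_rank(graph)
# C ends up with the highest score, since it receives A's and B's weight.
```

This is only the paper-level half of the problem; turning paper scores into author scores still requires deciding how to split credit among co-authors, which is one of the complications mentioned below.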
Empirically, pagerank hasn't been very successful at ranking authors for the reasons I mentioned, along with other complications (e.g. papers have multiple authors).
But more importantly, you're confusing impact factor with peer review. Peer review decisions are double-blind, and impact factor doesn't play a role (shouldn't, anyway). Papers don't get published in Science and Nature based upon the authors' impact factors.
"There are a lot of people trying to use pagerank for academic journals, but so far it hasn't worked well for various reasons."
Apart from eigenfactor.org, what other examples do you know of?
I'm not aware of anyone using PageRank for individual articles. (I know this isn't what you were referring to in your comment).
I'd be interested to know what algorithm Google Scholar uses to compute its rankings. The rankings it returns seem to be pretty close to pure citation counts, with some minor variations that could be explained by the relevance of each hit to the query.
Reading past the usual academic exaggeration (where everything is "promising" and "has potential"), the data is underwhelming -- there's no clear indication that pagerank has an advantage over traditional citation metrics.
Here's a Google cache link to a paper that discusses some of the things I was talking about (i.e. how the metaphor breaks down when moving from web to journals):