Now, with the sort of 20/20 hindsight that is always in abundance in such cases, we can see that there were other ways this could have been done. For example, instead of using the various tricks to keep the first image-ALT link from printing, we could simply have kept a count of the good links seen so far in the current stream and ignored the first one. Our actual solution is the more appropriate one in this case, but sometimes counting items is the best or only way to get a problem solved.
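To make that alternative concrete, here is a minimal sketch of the counting approach. It assumes the page has already been saved to a local file (called fresh_air.html here, a stand-in name), that we scan it with HTML::TokeParser, and that our "good link" URL test amounts to the href ending in .ram; that particular test is also just a stand-in for whatever checks the real program performs.

    use strict;
    use warnings;
    use HTML::TokeParser;

    my $stream = HTML::TokeParser->new('fresh_air.html')
      or die "Can't open fresh_air.html: $!";

    my $good_count = 0;
    while (my $tag = $stream->get_tag('a')) {
      my $href = $tag->[1]{'href'} or next;   # anchor with no href? skip it
      next unless $href =~ m/\.ram$/i;        # stand-in for our "good link" URL test
      my $text = $stream->get_trimmed_text('/a');
      ++$good_count;
      next if $good_count == 1;               # ignore the first good link (the image-ALT one)
      print "$text\n  $href\n";
    }

The counter replaces the skip-the-first-link trickery: every good link increments it, and we simply decline to print until it has passed one.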
More importantly, we could have done without all the code that tests the link URL and used one regexp to implement our last criterion, i.e., that the link text begin with "Listen to". But, as with our earlier consideration of how much of the URL to check, it comes down to the question: do you want something that's more careful (i.e., enforcing more assumptions on the input data, and so more prone to reject appropriate links in the future) or more forgiving (i.e., enforcing fewer assumptions, but more likely to match inappropriate links in the future)?
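A sketch of that more forgiving variant might look like the following: no URL tests at all, just the one regexp on the link text (leaving aside, for brevity, whatever we do about the image-ALT link; the local file name fresh_air.html is again just a stand-in).

    use strict;
    use warnings;
    use HTML::TokeParser;

    my $stream = HTML::TokeParser->new('fresh_air.html')
      or die "Can't open fresh_air.html: $!";

    while (my $tag = $stream->get_tag('a')) {
      my $href = $tag->[1]{'href'} or next;
      my $text = $stream->get_trimmed_text('/a');
      next unless $text =~ m/^Listen to/;     # the one and only criterion
      print "$text\n  $href\n";
    }

One regexp, far less code, and far fewer assumptions about the URLs; the price is that anything whose link text happens to begin with "Listen to" will sail right through.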
The answer depends on how concise you want the code to be, how much time you want to spend thinking up assumptions, and, most importantly, what happens if it breaks. If I've crontabbed this program to harvest Fresh Air listings every month and mail me the results, then when it breaks I'll get some sort of anomalous output mailed to me (with too few links, or too many), and it's no big deal, because, working or not, it's just so I can listen to interesting radio programs. But your data extraction program may instead serve many people who will be greatly inconvenienced if it stops working properly. You have to decide on a case-by-case basis whether your program should be more likely to clam up and miss interesting data in new formats, or to pass through new kinds of data despite the risk that they might be irrelevant or just plain wrong.