For (1), since you're fetching from trusted sources, and you're quite sure the output will be valid JSON, you can just eval() it.
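Roughly like this, as a minimal sketch (raw is my made-up name for the fetched body): plain eval() doesn't know JSON's true/false/null literals, so supply them as globals just in case they show up:

    # trusted input only; map the JSON literals eval() doesn't know about
    data = eval(raw, {'true': True, 'false': False, 'null': None})

(The stdlib json module's loads() would do the same without that caveat, if your Python is new enough to have it.)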
Quite important: you're fetching and parsing the page for every category, but you only need to do it once per wiki. Fetching it just once would make the script run roughly six times as fast.
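Something along these lines, where jobs, fetch_stats() and extract_count() are made-up stand-ins for whatever your fetch and parse code does:

    stats_cache = {}
    for wiki, category in jobs:
        if wiki not in stats_cache:
            stats_cache[wiki] = fetch_stats(wiki)  # one fetch per wiki, not per category
        count = extract_count(stats_cache[wiki], category)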
As for errors, in case you weren't sure: urlopen raises IOError when the page is dead (it may only surface on the .read() call), and the API reports its own failures inside the returned data, under the 'error' key. Properly validating the JSON somewhat conflicts with what I said for (1), so a cheap hack like fetched_string.startswith('{"query":') might be enough.
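Putting those together, roughly (a sketch, not tested against the real API; fetch_json is my name, and it folds in the eval() approach from (1)):

    import urllib2

    def fetch_json(url):
        try:
            raw = urllib2.urlopen(url).read()  # IOError raised here if the page is dead
        except IOError:
            return None
        if not raw.startswith('{"query":'):    # cheap sanity check, not real validation
            return None
        data = eval(raw, {'true': True, 'false': False, 'null': None})
        if 'error' in data:                    # the API reports failures in-band
            return None
        return data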
(...And I'm struggling to refrain from complaining about your HTML - I'm telling myself it doesn't matter.)
Edit: taking the difference between this table's values each month might be a better way of getting the monthly edit counts, too.
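For instance (assuming you keep the previous month's totals around as a dict; both names are made up):

    # monthly edits = this month's running totals minus last month's
    monthly = {cat: totals_now[cat] - totals_last_month.get(cat, 0)
               for cat in totals_now}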