That's a good intuition. In practice, peer review is a little simpler than that: it means roughly 3 people read the paper and agree it should be recommended for publication because it's interesting and holds together, logically or mathematically, depending on the type of study. If it's being published in a journal (e.g. The Lancet, Nature, Science), there's also typically an editor involved who can desk-reject a paper or override a rejection. The 3 reviewers are more or less selected from a pool of whoever is available to the journal — people who have published in it before and seem to have written on similar topics.
It's a hard process to get through, and it's where many papers go to die or get revised and resubmitted until they satisfy Reviewer #2's questions/problems. In a situation like the one we're in, there are institutional studies being conducted alongside academic studies intended for publication. Institutions such as research centers often put out reports. The VA is one such institution: it might release an institutional report, and then the individual study authors could submit their findings to a journal for peer review, which takes anywhere from 1-6 months depending on the journal. Probably faster now for COVID-related work, but there's a glut of it, so institutional reports can come out on a much faster timeline. Institutions also typically have their own internal peer review process so they don't just fire off anything, but it's always going to be a bit more susceptible to blind spots.
What there definitely is not, in any case, is a variety of people from multiple fields reviewing papers. Fields are much too siloed for that; it's usually specialists only. As for how studies are made public, that usually comes down to a combination of institutional or journal PR folks translating the findings for the press, and the study authors' own ability to effectively communicate what their work suggests. In a time like this, studies are open to a lot of public scrutiny and can fall out of favor quickly, even while just sitting on preprint servers (places where any scholar can upload work before it's reviewed). On a preprint server, theoretically anyone can come and trash your study, which is actually useful, because a chemist or statistician may have insights an epidemiologist lacks.
There is a lot of arbitrariness to the peer review process, but your inclination to find it generally rigorous is on point. It tends to reward caution and making only very limited claims about anything, because the working assumption is that something is false unless it stands up to multiple trials and studies demonstrating that there might, under these specific circumstances, probably be an effect.
That's why the first burden of proof isn't on the anti-malarial drugs' effectiveness, but on whether they can do harm, whom they might affect differently, and how they might do harm in a way that is measurable and generalizable across populations. That's generally what we need to address any medical problem, though the exigencies of the moment definitely incentivize haste. The peer review process isn't meant to accommodate this haste, which is why researchers have often used alternate publishing methods. It's not unusual, and it's also generally more acceptable when identifying risks as opposed to positive effects.