Rconnor - you have presented a textbook case of argumentum ad ignorantiam. Since "it" can't be any of the few things we do understand, it must obviously be our pet theory. Natural cycles? Internal variability? Well, we don't understand those, so obviously it can't be them. Puleeze!
Now, to your asinine suggestion that, because we may know models and even computational models but don't know climate models, we are singularly unqualified to proffer a learned opinion on said models. What a load of codswallop! I have been using computational/numerical models - finite element, finite volume, and finite difference - for 20 years, and have almost 20 papers in those topic areas to my credit. Damn right I know a thing or two about "models", and regardless of what is going on inside each element or volume, there are certainly some universal truths:
1) Boundary conditions: all models are sensitive to their boundary conditions. For climate models, that means what's happening at the edges of the model domain. Any textbook description of the atmosphere shows a huge variation in temperature as a function of height (and that profile is itself a function of latitude). Albedo is a boundary condition that depends weakly on the near-surface temperature (and on geography and geology).
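To make the sensitivity point concrete, here's a throwaway Python sketch - a toy 1D steady-state heat-conduction solve, with made-up numbers, not anything resembling an actual GCM - showing how strongly the interior solution tracks what you impose at the edges:

    # Toy 1D steady-state conduction: d2T/dx2 = 0 with fixed (Dirichlet) boundaries.
    # Illustration only; the temperatures are invented.
    import numpy as np

    def steady_state_temperature(T_left, T_right, n=50):
        # Tridiagonal system for the n interior nodes.
        A = (np.diag(-2.0 * np.ones(n))
             + np.diag(np.ones(n - 1), 1)
             + np.diag(np.ones(n - 1), -1))
        b = np.zeros(n)
        b[0] -= T_left      # boundary values enter through the right-hand side
        b[-1] -= T_right
        return np.linalg.solve(A, b)

    base = steady_state_temperature(T_left=288.0, T_right=220.0)
    bumped = steady_state_temperature(T_left=289.0, T_right=220.0)  # 1 K change at one edge
    print("max interior shift from a 1 K boundary change:", np.abs(bumped - base).max())

Nudge one boundary by 1 K and nearly the full 1 K propagates into the interior solution; get the boundary conditions wrong and everything downstream inherits the error.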
2) Initial conditions: our current climatological data is so spatially and temporally heterogeneous that setting proper initial conditions sufficiently far in the past so as to train or tune the model to match recent history is a fool's errand. Ergo, any training or tuning of the model to match historical conditions is not and cannot be physics-based.
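And initial-condition error isn't a rounding issue. Here's the classic toy demonstration (the Lorenz system in Python - a three-variable caricature, not a climate model, and the numbers are mine): two runs that start a millionth apart end up in completely different places.

    # Initial-condition sensitivity with the Lorenz system (forward Euler, dt = 0.01).
    import numpy as np

    def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        x, y, z = state
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        return state + dt * np.array([dx, dy, dz])

    a = np.array([1.0, 1.0, 1.0])
    b = a + np.array([1e-6, 0.0, 0.0])   # a tiny error in the starting state
    for _ in range(5000):                # integrate both trajectories forward
        a, b = lorenz_step(a), lorenz_step(b)
    print("separation after 5000 steps:", np.linalg.norm(a - b))

If a perturbation of one part in a million does that in three variables, imagine what sparse, patchy historical data does when you have to initialize millions of grid cells.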
3) Discretization and discretization error: the volume size in the current models is woefully inadequate to resolve spatially and temporally significant weather and climate phenomena (climate being merely the time- and space-integral of weather). I have lived and travelled in some pretty diverse places, and I can say categorically that the spatial grid size is poor. I've also done numerical simulations (CFD, in this case) where we were trying to simulate phenomena such as shock waves. The grid size is everything. Have you seen presentations of real upper-level wind data? The features shown there (and this is actual data, not a simulation) are very important climatically speaking, and yet resolving such details would take a grid at least an order of magnitude finer than what the current generation of models has.
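A back-of-the-envelope way to see the problem, in Python (all the lengths here are illustrative numbers I picked, not any model's actual specification): sample a "weather feature" roughly 100 km wide on a ~25 km grid and on a ~250 km grid and see how much of it survives.

    # Toy resolution check: a narrow Gaussian "storm" sampled on two grids.
    import numpy as np

    def peak_on_grid(spacing_km, width_km=100.0, centre_km=1115.0, domain_km=2000.0):
        x = np.arange(0.0, domain_km, spacing_km)
        feature = np.exp(-((x - centre_km) / width_km) ** 2)  # true peak value is 1.0
        return feature.max()

    print("true peak          :", 1.0)
    print("peak on 25 km grid :", round(peak_on_grid(25.0), 3))
    print("peak on 250 km grid:", round(peak_on_grid(250.0), 3))

The coarse grid reports roughly a quarter of a peak that is really there at full strength; whatever dynamics depend on that feature are simply invisible to it.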
4) Volume or element formulation: I have done simulations with more than 20 variables per grid cell (including some that carried three distinct temperature metrics - plasmas are a blast to model, BTW). Within a single cell, you can model only the simplest physics. Since the resolution of the current climatological models is so coarse, they try to cram all sorts of extras into each cell. Been there - done that - and it's a fool's errand.
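Here's roughly what that cramming looks like in code (a hypothetical Python sketch - the fields and constants are mine, not any real GCM's data structures): everything smaller than the cell gets collapsed into tuned parameterizations rather than resolved physics.

    # Hypothetical per-cell state and a stand-in sub-grid parameterization.
    from dataclasses import dataclass

    @dataclass
    class GridCell:
        temperature_k: float
        pressure_pa: float
        humidity: float
        cloud_fraction: float
        aerosol_load: float
        soil_moisture: float
        snow_cover: float
        # ...plus winds, radiation terms, chemistry, and so on, all in one coarse box

    def subgrid_precipitation(cell, threshold=0.8, efficiency=0.1):
        # "If the box is humid enough, rain some of it out."
        # The threshold and efficiency are tuning knobs, not first-principles physics.
        if cell.humidity > threshold:
            return efficiency * (cell.humidity - threshold)
        return 0.0

    cell = GridCell(288.0, 101325.0, 0.9, 0.5, 0.2, 0.3, 0.0)
    print("parameterized precipitation for this cell:", subgrid_precipitation(cell))

Every one of those knobs sits where a resolved physical process would otherwise be.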
5) Validation: this is something that has been hammered home to me so many times by professors and mentors. Does your model match an experiment, or reality? Well, the divergence between modelled and observed atmospheric temperatures during this long "pause" shows that validation has not yet been achieved. And this failure of validation is likely due to the above-noted issues.
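For clarity, here is the kind of check I mean, as a minimal Python sketch - the two series below are invented numbers standing in for observed and modelled temperature anomalies, purely to show the procedure rather than any actual data:

    # Quantify the model-vs-observation gap instead of asserting skill.
    import numpy as np

    observed = np.array([0.10, 0.12, 0.15, 0.14, 0.16, 0.18])   # hypothetical anomalies (deg C)
    modelled = np.array([0.10, 0.16, 0.22, 0.27, 0.33, 0.38])   # hypothetical model output (deg C)

    bias = float(np.mean(modelled - observed))
    rmse = float(np.sqrt(np.mean((modelled - observed) ** 2)))
    print(f"mean bias: {bias:.3f} C, RMSE: {rmse:.3f} C")

A model is validated against such data only if those errors fall inside an acceptance criterion you stated before you looked; otherwise you are just admiring curves.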
Now - I agree that the CO2-temperature hypothesis does not need these sorts of models. However, claims of forthcoming catastrophe most certainly do. In fact, everything in this topic that is forward-looking relies on "the models". Without catastrophe, there is no need to "act". I am certainly willing to admit that my philosophical and political leanings bias me against the proposed "actions" required to "save the planet", but being sufficiently self-aware, I also know that my technical understanding of this topic is not clouded by my pre-existing biases. Can you say the same?