Hi, Jerry.  Here's how I respond to the parts that directly concern me.

Note 2: Weak AI & Strong AI

The AI thesis is "Machines can be intelligent, or think."  If the Weak AI thesis is "Machines can act intelligently but can't really be intelligent, or think" (as Searle would have it), then so-called "Weak AI" embeds the denial of the AI thesis.  Its second clause just is that denial.  That's what I mean by "weak AI is no AI" ... perhaps I should have added, "philosophically speaking".  I never intended any such argument from the meaning of "artificial" as you seemed to think I did: as you point out, an artificial X ain't necessarily an X.  I knew that.

You invoke the AI research community's adoption of the "strong AI"/"weak AI" terminology against me as evidence of its aptness to the concerns of the community and its acceptability to the community ... fair enough.  For a long time it troubled me that the community had adopted these labels to describe different sorts of AI work with different sorts of aspirations -- work with no aspirations to explain or model human thought processes being commonly styled "weak AI" and work with such aspirations being styled "strong AI" -- and I deplored this usage for giving currency to what I viewed as Searle's fallacy-inviting terminology.  However, it now seems to me that in adopting this terminology, the AI community (in its collective wisdom) has effectively put asunder what Searle had unhelpfully bundled.  "Weak AI" is commonly used in the AI research community to refer to the thesis that computers can act intelligently, neat ... in what I call the Penrosean sense (after Roger Penrose, who explicitly adopts this usage).  Thus understood, "weak AI" and "strong AI" do not name contrary theses (as for Searle) but compatible ones; indeed, on this usage strong AI practically entails weak AI.  Moreover, among those in the AI community who describe their work as "weak AI" or describe themselves as working in the "weak AI tradition", it seems to me that very few mean thereby to explicitly commit to outright denial of "strong AI": they're more apt to dismiss it as "premature" or "speculative" or "philosophy"; they're more agnostic than atheistic.  I have more recently come around to using the "strong"/"weak" terminology in this Penrosean sense myself, as in my AI entry for the Internet Encyclopedia of Philosophy.

Note 3: Naive AI

Naive AI aside, I would say "the robot reply" -- substituting a causal-theoretic approach to content for the syntactic ("conceptual role" or "inferential role") approach of Computationalism -- provides another "systematically developed and potentially persuasive" line of "argument for strong AI"; a more persuasive one than the pure computational line, I think.  There are, of course, outstanding difficulties with both: yet these seem to me the only two potentially plausible accounts of semantic content currently being developed, and both would be supportive of strong AI.

On the other hand ... what systematically developed and potentially persuasive theory of content argues against strong AI?  Strong AI -- holding that machine thought is possible -- is actually a very weak claim.  Conversely, so-called "weak AI" (in Searle's sense), in its anti-AI part, is actually a very strong claim of impossibility!  It would seem from this that the theoretic burden of providing a "systematically developed and potentially persuasive" account of semantic content rests more heavily on the strong AI opponent.  I say it does.

Enter "naive AI".  A humble doctrine (if that), humbly named, combining what one of my committee members called "the lowest form of argument" (burden of proof) with raw empiricism.  I'm not proud, but here I stand: here, I say, the matter stands.  Against such naive AI you contend,

Hauser thus considers the fact that computer doings commonly “inspire such [mental] predications” to offer a “wealth of empirical evidence” for strong AI (p. 212). But this naïve AI argument begs the question and is, well, naive. Everyone agrees that we have a natural tendency to use mental language in regard to computers. This is a statement of the phenomenon that is the target of the dispute, not an argument for one explanation or another.

As for naivete: the case I make appeals to artless, ingenuous judgments marked by their unaffected simplicity: that's the raw empirical part.  As for this appeal being credulous or deficient in worldly wisdom (naive in the pejorative sense) ... to this I say there is no such wisdom extant.  There is no systematically developed and potentially persuasive anti-AI argument on offer to be ignorant of: that's the burden of proof part.

As for begging the question ... I'll say this.  Burden of proof arguments and the fallacy of begging the question have something in common.  Begging-the-question fallacies are unusual, as fallacies go, in being valid; even, perchance, sound (if their premises happen to be true).  The trouble is that (since the conclusion is included in the premises) the premises can provide no further support for the conclusion; and that, after all, is the argumentative point: the failing is evidential and rhetorical rather than logical.  It's a meta-fallacy (so to speak) whose diagnosis requires semantic ascent ... from talk about the matter at issue to talk about the structure and context of the argument itself.  Burden of proof is likewise a meta-stratagem.  Indeed, it is not "an argument for one explanation or another", and naive AI would not provide the sort of theoretic justification of strong AI you demand; rather, it rejects that demand as premature (or worse).

I would say, moreover, that the phenomenon that is the target of the AI dispute is not "that we have a natural tendency to use mental language in regard to computers" but rather the appearances that (irresistibly, it seems) engage that tendency.  Of course everyone recognizes these facts.  Naive AI asserts that their evidentiary force has been neglected -- by AI deniers (for obvious reasons), and by AI promoters due to their grandiose theoretic (computationalist) ambitions.  Since I am challenging the common agreement that appearances to date do not underwrite credible mental attributions, and your reply, in effect, is to appeal to that common agreement, it seems to me that it is you who beg the question.

"The fact that that terms are used does not imply that they are being used literally or (even if intended literally) that they literally apply" ... of course.  Our artless, ingenuous, judgments are not necessarily correct.  I only say they carry a certain evidentiary weight: that weight is the burden of proof that AI deniers must overcome.  Of course we might be speaking figuratively and equivocally but here still "Occam's Eraser") thou shalt not impute ambiguity unnecessarily.  Again, the burden of proof, is on those who would claim our naive judgments are figurative or equivocal: furthermore, as I have pointed out, application of standard ambiguity tests to machine-mental predications fails to diagnose any such ambiguity.