Tuesday, October 27, 2009

Comment on Patentlyo Post – Tracking the Use of Continuations – Lemley & Sampat 10/26/09




This post gets to several issues germane to Lemley and Sampat's recent guest post on Patentlyo.com ("Tracking the Use of Continuations," 10/26/09). That post characterizes the patent process as a continuing process of negotiation, with which I completely agree. However, I believe its discussion of continuations may assign the intention of "opt[ing] for a slower process" to many applicants whose intentions are in the right place but who appear to deliberately delay prosecution when problems inherent to prosecution are to blame. By thinking of the patent process as a binary search, and by finding ways to curb abuse of continuations that account both for the limitations of patent prosecution and for the real need for continuations in some circumstances, we may arrive at a model that treats Examiners and practitioners with equanimity while avoiding extremes.

Problem of Intangible Boundaries: An imprecise binary search for precise claim language


The patent process is fundamentally a process of negotiation. From the filing of an application, the applicant and the Examiner are poised to take part in a binary search of sorts (apologies in advance for banal algorithmic metaphors!). The binary search is a search for the claim language that perfectly describes the enabled technology just outside the penumbra of the prior art. The search starts with a likely overbroad initial claim set filed with the application. The Examiner does the requisite search and sets the bounds for the search: one boundary is the applicant's claims, and the other boundary is the prior art.

More technology
---------- Prior art
---------- Nebulous search territory on which to negotiate claims
---------- Claims initially filed
Less technology


After the first office action, part of the capacious "nebulous search territory" has been set by the Examiner in the first step of the binary search. I refer to this as a "binary search" because, in algorithmic parlance, a binary search over an ordered set repeatedly chooses the point halfway between the current bounds. The range between the claims initially filed and the prior art may be abstractly analogized to an ordered set, and the first office action should, in theory, be the point halfway across the "nebulous search territory." The process continues in the response, when the applicant chooses another point between what the Examiner said in the first office action and where the applicant believes the bounds of the technology lie, and so on. While in theory the search ends with finding the perfect language encompassing the applicant's technology, in practice this is hardly the case. Put bluntly, the claim language that is finally agreed on, even in an issued patent, is almost certainly not the exact abstract boundary at which the applicant's technology comports with ALL patent laws (enablement, written description, novelty, and obviousness, to name a few). In other words, the claim language in a finally issued patent probably covers either more or less than the applicant deserves, because of the inherent problem of describing intangible technological boundaries in concrete words (call this the Problem of Intangible Boundaries? We should call it something!). This, I believe, is the strongest reason for allowing at least some form of continuations: while the intangible boundaries may not mean much in a patent with a market of $200,000, they will mean a lot in a market worth $200,000,000.
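The binary-search metaphor can be made concrete with a toy sketch. Everything numeric here is invented purely for illustration: the 0-to-1 "claim scope" scale, the allowability test, and the tolerance are hypothetical, not anything from actual prosecution practice.

```python
def search_claim_scope(is_allowable, narrow=0.0, broad=1.0, tol=1e-6):
    """Binary-search the 'nebulous territory' between clearly allowable
    narrow claims and the overbroad claims initially filed, converging
    on the broadest claim scope that clears the prior art."""
    while broad - narrow > tol:
        mid = (narrow + broad) / 2  # the next office-action/response position
        if is_allowable(mid):
            narrow = mid  # claim survives; the applicant pushes broader
        else:
            broad = mid   # rejected over the prior art; the claim must narrow
    return narrow

# Hypothetical example: anything broader than scope 0.6 reads on the prior art.
boundary = search_claim_scope(lambda scope: scope <= 0.6)
print(round(boundary, 3))  # converges on 0.6
```

Of course, real prosecution has no such crisp `is_allowable` oracle; that is exactly the Problem of Intangible Boundaries described above.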

Continuations: Avoiding the extremes to promote a spirit of cooperation


The authors state that many continuations, in the form of RCEs or normal continuations, may be filed to delay prosecution, and I agree that some and possibly many continuations serve this purpose. However, at least some continuations are filed not to delay prosecution or to modify applications to track the marketplace, but to solve the Problem of Intangible Boundaries posited above. Several solutions have been tossed around to prevent prosecution delay, but I do not feel these solutions treat all parties equally. Some want to limit continuations to a specific number. Others want to shift the burden of showing prior art to the applicant after a certain number of continuations. Still others want to charge a large fee for each continuation. Some combination of all of these solutions may be appropriate, but any one alone is not likely to give the patent system the change it needs. Given the new spirit of cooperation I am feeling in the patent community with the passing of the new Examiner Count System, it seems likely that Mr. Kappos and the Patent Bar will proceed with equanimity. I just hope they don't forget that some form of unimpeded continuations is necessary given one of the inherent difficulties of prosecution: the Problem of Intangible Boundaries.

Monday, October 26, 2009

Using Propositional Calculus to Define Anthropic Principle




One definition of the Anthropic Principle states that if some property of the universe or the Laws of Physics were not true, we couldn't exist (The Cosmic Landscape, Susskind, p. 79). "The Laws of Physics have to be such that they allow life because if they weren't, there wouldn't be anyone to ask about the Laws of Physics." (p. 197). I think it's important to go through the definitions of the Anthropic Principle with formal logic and propositional calculus to really get to the truth of what the Anthropic Principle is saying, if it's saying anything at all.

With all the different forms of Anthropic reasoning, it's hard to pull out what logicians call a "well-formed formula" (or wff) from the various definitions that are out there. Take, for example, the statement "when it rains, the ground is wet." The statement may be formalized as the conditional R -> W if we define the wff R to be "it rains" and W to be "the ground is wet."

R -> W

In these statements, "it rains" and "the ground is wet" are both well-formed formulas, the absolute basic building blocks of well-reasoned logical statements.

Part of the problem with the Anthropic Principle is that the very arguments surrounding it are based on shoddy well-formed formulas. For example, in a previous post (Leonard Susskind's The Cosmic Landscape …), the Anthropic Principle was defined as "the universe exists for us to observe it." Stated this way, it is hard to know exactly what well-formed formulas may be extracted for logical deduction beyond mere tautological statements. We have choices for what our well-formed formula may be. By hypothesis, taking "the universe exists for us to observe it" as a single well-formed formula U allows us to make no useful deductive statements.

U -> (U & U); or possibly U -> (U or U);

These are tautological statements. Let's assume instead that the Anthropic Principle allows us to make more precise well-formed formulas, such as U, "the universe exists," and O, "we observe the universe." We may then derive U <-> O from U -> O and O -> U:

Prove: U -> O, O -> U; therefore U <-> O
1. U -> O    Assumption
2. O -> U    Assumption
3. U <-> O   1, 2 Biconditional Introduction (<-> I)
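The derivation above can be checked mechanically with a truth table. This is a minimal sketch in Python: the `implies` helper is just the standard material conditional, and the loop exhaustively verifies biconditional introduction over all truth assignments of U and O.

```python
from itertools import product

def implies(p: bool, q: bool) -> bool:
    """Material conditional: p -> q is false only when p is true and q is false."""
    return (not p) or q

# Exhaustively check every truth assignment of U and O:
# whenever both U -> O and O -> U hold, U <-> O (i.e., U == O) must hold too.
for U, O in product([True, False], repeat=2):
    if implies(U, O) and implies(O, U):
        assert U == O, "biconditional introduction failed"
print("U -> O, O -> U entail U <-> O for all truth assignments")
```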

Using some imprecise translation: 1. U -> O: the universe exists so we can observe it. 2. O -> U: we observe the universe, so it exists. However, statement 1, that the universe exists so that we can observe it, is a very strong logical statement: at worst a circular argument, and at best a statistical syllogism whose inductive probability is extremely weak, for reasons stated in a previous post (Leonard Susskind's Cosmic Landscape …). The problem is that if even one other observing being existed in the universe besides us, the inductive probability, the probability that the statement is true, would go from 100% to 50%, because the universe could only be said to be created half for us and half for them. This hardly makes the first assumption, "the universe exists so that we can observe it," a sound premise.

But more problematic than the statement "the universe exists so that we can observe it" being a statistical syllogism, one made on the 100% assumption that we are the only observing life, is that it assumes its own conclusion. To say that the universe exists so that we can observe it is to say that we observe the universe, so it exists. The very statement assumes its conclusion, namely, that the universe exists for a particular purpose: so that we may observe it.

So the Anthropic Principle should be defined very carefully. Cursorily phrasing its definitions will almost certainly result in some sort of circular reasoning or tautology. That being said, I am not sure the Anthropic Principle is anything more than circular reasoning or tautology. Leonard Susskind and others believe some form of Anthropic reasoning is probably true, but it's not clear what well-formed formula, or what definition of the Anthropic Principle, they are talking about.

Tuesday, October 20, 2009

Leonard Susskind’s The Cosmic Landscape – And my comments on its discussion of the Anthropic Principle





I skimmed this book about a month ago to get the gist, and it's wonderful. Leonard Susskind has a flair for explaining complex ideas in simple terms, and I commend him for it. Now, after going back over it, I have discovered an anomaly! An aberration in The Cosmic Landscape that may be more a matter of form than of function, but one I think is worth chatting about. The book's description of the cosmological constant creates an impression of its relationship to the Anthropic principle that leaves some ambiguity, or at least it did for me. Basically, I do not believe the uniqueness of the cosmological constant necessarily allows the deduction that the Anthropic principle is true.

While definitions may vary, most accepted definitions say the Anthropic principle states that the universe exists so that we may exist to observe it! Let me be absolutely clear: Susskind does not completely adopt the conclusion that the cosmological constant proves the Anthropic principle. He simply analyzes this conclusion. But given the stated definition, it's worth noting that the probability-based arguments for the Anthropic principle lead to the conclusion that we are the only life capable of observation in our universe. Because some arguments use the cosmological constant as a linchpin to hold up the Anthropic principle's tenets, an analysis of the cosmological constant is necessary. The cosmological constant is an observed "vacuum energy" that exists throughout space: minuscule vibrations ("quantum jitters") that create a sort of background noise. Explaining why the cosmological constant has the value it does is one of the most important of all unsolved physics problems. It has puzzled scientists that the cosmological constant cancels out to 119 decimal places, to just the right value for the possibility of life.

This improbable event is critical to the Anthropic principle, which states that the laws of physics, indeed the universe, exist so that we may exist to observe them. The statement that the very laws of physics exist in all their Newtonian (substitute your favorite physicist, P.A.M. Dirac?) glory for us to exist is an absolute statement that requires a probability of p(exists) = 1. Stated differently, to say that the universe exists so that we may exist to observe it is to say that the probability of us being the only observing life in our universe is p(exists, but only us) = 1. But I don't know if we can make such a strong statement. This probability of us being the only life capable of observation is linked to the remotely improbable event of the cosmological constant canceling out to all 119 places, which suggests a p(exists) = 0.999… (116 more 9's, but you get the point). This discrepancy of p = 1 × 10^-119 is significant. If you take into consideration the over 400 exoplanets discovered, some quite similar to Earth, it may be possible to say, hypothetically, that p(exists) = 0.999999, because the chance of other life is still pretty remote (technically, if 1 of 409 planets has life, then p(exists, only us) = 408/409?). But even the slightest probability of other observing (I use that word because that's the word the Anthropic principle uses) life invalidates the claim that we are the only observing life that exists, a probability that must be 1 for us to conclude the Anthropic principle is valid. It's not so surprising, given the pretty extreme conclusions of the Anthropic principle, that it would require an extreme premise: that we be the only observing life in the universe.
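The back-of-envelope arithmetic above can be spelled out. To be clear, the 409-planet figure and the 408/409 probability are the post's own hypothetical illustration, not real cosmological estimates:

```python
from fractions import Fraction

# Hypothetical numbers from the discussion above: 409 candidate planets,
# one (ours) known to host observing life.
planets = 409
p_only_us = Fraction(planets - 1, planets)  # the post's back-of-envelope 408/409
print(float(p_only_us))  # about 0.9976, well short of the p = 1 the argument needs

# The cited fine-tuning of the cosmological constant (cancellation to 119
# decimal places) corresponds to a discrepancy on the order of 10**-119,
# which is dwarfed by the 1/409 gap left by even one other candidate planet.
gap = 1 - p_only_us  # = 1/409
print(gap > Fraction(1, 10**119))  # True
```

Exact rational arithmetic (`fractions.Fraction`) is used because a quantity like 10^-119 underflows ordinary intuition far more gracefully than it underflows a float comparison against 1.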

So, do aliens exist? Do they exist in a remote region of space by sheer probability? Don't know, and probably not. But I know the Anthropic principle leads to some conclusions that push the limits of the law of large numbers. I am not trying to disparage the Anthropic principle (after all, I am not a physicist, just a layperson) because, after all this, I do believe some form of it is actually partially true. And I am not just saying that because Susskind and other great minds may believe it partly true. I simply believe the definition needs refining, and the Anthropic principle may be true for reasons dealing more with Quantum Mechanics, which I will explain in a later post.

BTW: Susskind's book is awesome. Any inquisitive mind should buy it.

Monday, October 19, 2009

The Biggest Problem in Patent Law: Rates




The comments on this and other posts on this blog are my opinion.

There are many aspects of the patent system that may be reformed, but one of the most important is the relationship between the rate at which new law is created (in the district courts and the Court of Appeals for the Federal Circuit) and the rate at which it is interpreted, organized, and fixed (at the CAFC and the Supreme Court). Essentially, new law is created at about double the rate (an approximate guess, but it's got to be pretty close, or more) at which it is interpreted and organized at the Supreme Court, and this leads to an ongoing corpus of unworkable patent law that is quicksand for organization, structure, and predictability in patent law.

A key example is the teaching, suggestion, or motivation to combine test (the TSM test, hereinafter) created by the CAFC. It states that there must be a teaching, suggestion, or motivation to combine elements in the prior art to show a patent is obvious. The law since 1966 has said the Graham factors determine whether a patent is obvious (Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966)). The Graham factors look at 1) the scope and content of the prior art; 2) the level of ordinary skill in the art; 3) the differences between the claimed invention and the prior art; and 4) objective evidence of nonobviousness. Basically, the TSM test explicitly requires a teaching, suggestion, or motivation to combine elements in the prior art, which is completely opposite to the Graham factors, each of which requires an objective, non-specific evaluation of the prior art and patents. Stated differently, the TSM test, a test which people openly and assiduously applied, was completely contrary to the established law of the Graham factors. This will most certainly create some unworkable law, because from at least 1997 (Gambro Lundia AB v. Baxter Healthcare Corp., 110 F.3d 1573, 1579, 42 USPQ2d 1378, 1383 (Fed. Cir. 1997)) 1 the TSM test was dispositive on obviousness, until it was overturned in KSR v. Teleflex in 2007 (KSR International Co. v. Teleflex Inc. (KSR), 550 U.S. ___, 82 USPQ2d 1385 (2007)). So for at least 9 years, district courts and the CAFC, not to mention practitioners and patent Examiners, were applying law squarely inconsistent with the Graham factors. This rate of creation of new law has been detrimental to organization, structure, and predictability, because some law followed the TSM test and some law followed the Graham factors.


Now it should be noted that there are many grey areas of law that require district courts and juries to determine the specific meaning of legally interpreted terms and to resolve material facts. However, purely as a matter of law, the TSM test holds that the TSM is dispositive on obviousness, and it is logically inconsistent as a matter of law with the Graham factors' determination of obviousness. In other words, it's important to note that the inconsistency is not mere bickering over multiple interpretations of the law, but a logically inconsistent law that was applied contrary to another area of law for at least 9 years. This was a recipe for dissonance in patent law.

The Supreme Court responded brilliantly in 2007 by interpreting with a broad brush and overturning the TSM test as inconsistent with precedent. By doing this, the law was changed, practitioners' confusion was assuaged, and the law was finally set clear on what the obviousness test really is. However, there were still 9 years of precedent that was created and only partially undone in 2007. It would and will take years to undo all the precedent created in those 9 years by reapplying the test set forth in KSR v. Teleflex.

The problem comes down to one of rates: Is the rate at which the Supreme Court interprets and organizes patent law fast enough to keep up with the rate at which new law is created at the CAFC and in the district courts? It may not be now, because leaving 9 years for potentially inconsistent law to accumulate creates an unworkable corpus of patent law. I don't know if the Supreme Court denied cert on this issue during those nine years, and I don't fault anyone for trying to interpret an area of law as complex, and with as many issues involved, as patent law. But the more the Supreme Court paints with broad strokes that remove complexities inherent to the already difficult problem of describing the boundaries of a technology in words, the more the rate of creation of new law will meet the rate of organization of law, leading to more overall structure and predictability in the rapidly increasing corpus of patent law. I believe this will tremendously benefit the patent system, an absolutely vital part of our economy.

1 - http://www.ll.georgetown.edu/Federal/judicial/fed/opinions/98opinions/98-1553.html

Wednesday, October 14, 2009

Kappos's Proposed Changes to the PTO Count System: the First Step





When I first read on Patentlyo.com about Kappos's changes to the Count System, which essentially creates an incentive scheme for PTO Examiners ( http://www.patentlyo.com/patent/2009/10/changing-the-uspto-count-system-incrementally.html ), I was pretty impressed. He proposes a system that stymies the incentive for the Examiner to rack up counts through RCEs. His proposal is a first step toward reforming the count system, but if something is not done to INCREASE THE COUNTS in proportion to the AMOUNT OF TIME it takes an Examiner to dispose of a case, quality will suffer.

Under "proposed package" he breaks down exactly how the count system would be revised, even breaking down the amount of counts to less than whole numbers. This is a great sign as it shows that Kappos is willing to give RCE's, first office actions on the merits, and final rejections different counts according to the amount of weight they deserve. The proper weighting of count to action is so precise that a first office action on the merits would get not 1 count, but 1.25 counts. More precision in accounting for incentives that an Examiner may or may not have is a GOOD THING.


But it only goes halfway. The proposed system does a good job of properly creating incentives for each case as a whole, but leaves out any count adjustment proportional to the amount of time it takes to examine each INDIVIDUAL CASE. While a weighting of 1.25 counts for a first office action on the merits may be proper for a case with 3 independent claims and 25 total claims, to allot the same 1.25 counts for a case with 100 independent claims and 300 total claims creates a disjunction. This disjunction creates an incentive for the Examiner to spend less time on the case with 300 claims.

It's worth introducing a new rule into the lexicon of patent parlance that may assist in correcting the above disjunction.

Definition - The Rule of Count-Hour Proportionality: The amount of time an Examiner spends on a case should be directly proportional to the amount of counts an Examiner gets for that case.

Essentially, the counts scale with the number of hours, and therefore an Examiner, who is actively looking to increase his or her counts (by design), would spend more time on a complicated case and get more counts from it. This is one of the biggest remaining problems at the PTO, and if fixed, it could help increase the quality of patents.
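The rule can be sketched in a few lines. To be clear about assumptions: only the 1.25-count first-action figure and the 3-independent/25-total baseline come from the proposal as discussed above; the workload model and its coefficients are made up purely for illustration.

```python
def estimated_hours(independent_claims: int, total_claims: int) -> float:
    """Hypothetical workload model: a base effort plus per-claim effort.
    The coefficients are invented for illustration only."""
    return 8.0 + 2.0 * independent_claims + 0.25 * total_claims

def proportional_counts(base_counts: float, independent_claims: int,
                        total_claims: int, baseline_hours: float = 20.25) -> float:
    """Scale an action's base counts (e.g. 1.25 for a first action on the
    merits) by the case's estimated hours relative to a baseline case
    (here, 3 independent / 25 total claims = 20.25 modeled hours)."""
    return base_counts * estimated_hours(independent_claims, total_claims) / baseline_hours

# The baseline case keeps its proposed weight:
print(proportional_counts(1.25, 3, 25))    # 1.25
# A much larger case earns proportionally more counts:
print(proportional_counts(1.25, 100, 300)) # well above 1.25
```

Under a scheme like this, the Examiner's incentive to shortchange a 300-claim case disappears, because the counts track the modeled hours by construction.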