The Tetrast2 - Speculation Lounge
Sketcher of various interrelated fourfolds.
Don’t miss the grove for the tree(s).

Nontrivia

March 31, 2010.

Recentest significant edit: December 19, 2013.

In my previous post "Unsettlings" I discussed a double opposition, or "double chiasm" as I called it, among the (cognitive) lights in which a given phenomenon would seem (1) simpler or (2) more usual or normal or (3) clearer, more clarificatory, more significant or informative, or (4) deeper, less trivial:

[Image: the four items below arranged in a 2×2 square, with an X of crossing diagonals linking the opposed corners; each diagonal itself a narrow X.]

1. Simplicity, optimality, etc.
2. Likeliness, probability, etc.
3. Informativeness, significance, etc.
4. Nontriviality, depth, complexity, etc.

The first three correlate pretty obviously to mathematics of optimization, mathematics of probability, and mathematics of information. The fourth (nontriviality, depth, etc.), in at least one sense, seems to me to correlate to mathematical logic.

Inverseness between probability and information.

A message's quantity of information, its amount of informativeness or "newsiness," reflects the improbability of that message before it was sent. The information quantity is not usually quantified simply as 1 minus the message's erstwhile probability (e.g., 1 minus 1/8 probability equals 7/8 improbability), but it is still pretty simple: the logarithm of the reciprocal of the erstwhile probability (for example, 1/8 probability, ergo 3 bits). The smaller the erstwhile probability, the larger the logarithm (to a given base), and we can think of information as a kind of inverse of probability.
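For concreteness, a minimal sketch in Python of that logarithm-of-the-reciprocal measure (the probabilities are merely illustrative):

```python
import math

def surprisal_bits(p):
    """Information content, in bits, of a message whose prior probability was p."""
    return math.log2(1 / p)

print(surprisal_bits(1 / 8))  # 3.0 bits: the 1/8-probable message of the example
print(surprisal_bits(1 / 2))  # 1.0 bit
print(surprisal_bits(0.999))  # ~0.0014 bits: the nearly certain is nearly no news
```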

So, inverseness between optimality and nontriviality?

I mean "optima" as in mathematical programming, a.k.a. optimization. If one puts, as I do, optimality and nontriviality/depth likewise into an opposition, one might expect a similar kind of inverseness. Lloyd's and Pagels's idea of thermodynamic depth is "the entropy of the ensemble of possible trajectories leading to the current state" (from Cosma Shalizi's notebook on complexity measures) and gets us the idea of some sort of opposite or inverse of the shortest path (simple, optimal, etc.). Then there is the idea of algorithmic complexity, the length of the shortest program capable of producing a given result, a quantity uncomputable in general because of the halting problem; and anyway there is the general idea that you can't get a ten-pound theorem out of five pounds of axioms (as discussed by Chaitin). So by merely looking for "big-picture" patterns (and roving through things like the Mathematics Subject Classification), I seem, despite my amateurish ignorance, to have found myself in the right neighborhood.
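Algorithmic complexity is uncomputable, but a general-purpose compressor gives a crude, computable upper bound on description length, enough to make the contrast between the simple and the complex vivid. A minimal sketch (the strings are my own toy examples):

```python
import random
import zlib

def description_bound(s: bytes) -> int:
    # Length of the compressed form: a computable upper bound
    # on the (uncomputable) shortest-description length.
    return len(zlib.compress(s, 9))

regular = b"ab" * 500  # highly patterned: a short program could print it

random.seed(0)
messy = bytes(random.randrange(256) for _ in range(1000))  # random-looking

print(description_bound(regular))  # small: the pattern compresses away
print(description_bound(messy))    # near 1000: no short description found
```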

Shalizi, with a bluntness that is helpful to the general reader, starts out his above-linked notebook "Complexity Measures" with this striking paragraph:

C'est magnifique, mais ce n'est pas de la science. (Lots of 'em ain't that splendid, either.) This is, in the word of the estimable Dave Feldman (who taught me most of what I know about it, but has rather less jaundiced views), a "micro-field" within the soi-disant study of complexity. Every few months seems to produce another paper proposing yet another measure of complexity, generally a quantity which can't be computed for anything you'd actually care to know about, if at all. These quantities are almost never related to any other variable, so they form no part of any theory telling us when or how things get complex, and are usually just quantification for quantification's own sweet sake.
Now, one may note that there also seems no general quantification of "optimality," either — instead, one seeks specific optima. As for a quantity of feasibility, it might just be a roundabout way of locating an optimum (a located feasible solution getting characterized, say, by a distance and direction from the optimum). If feasibility is considered just lowness of cost (compared to a highest feasible cost) or size of net benefit (minus some lowest feasible net benefit, I guess) or some unifying generalization of those ideas (I don't know what), it still isn't like a ratio, comparable across disparate cases. If we set the optimum to unity in order to get that comparability, then can feasibility come out like probability? There's a duality between optimization and probability where cost corresponds to probability. (One would think it more intuitive that cost would correspond to improbability formulated somehow, but I don't know whether that leads to problems or is merely less convenient for expositing the duality. Here's a paper (PDF) of which I understood maybe three sentences and one formula. Update 5/17/2013: I think I get it now. An extremal solution is generally less likely to be chosen at random. A straight line is the opposite of a random walk. Something like that. The first paragraph of the linked paper "Duality between Probability and Optimization" ends: "To the probability of an event corresponds the cost of a set of decisions. To random variables correspond decision variables." End of update.)
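As far as I understand the correspondence, it can be sketched minimally like this (the paths and numbers are merely illustrative; the linked paper's framework is more general, and taking cost as the negative log of probability is just one common way to set it up):

```python
import math

# Each "path" is a sequence of independent decisions. Probabilities multiply
# along a path; costs, taken as negative logs, add along a path.
paths = {
    "path 1": [0.9, 0.5],
    "path 2": [0.6, 0.7],
    "path 3": [0.99, 0.3],
}

probability = {k: math.prod(ps) for k, ps in paths.items()}
cost = {k: sum(-math.log(p) for p in ps) for k, ps in paths.items()}

# Maximizing probability and minimizing cost pick out the same path.
print(max(probability, key=probability.get))  # path 1 (0.45 vs 0.42 vs 0.297)
print(min(cost, key=cost.get))                # path 1: same path, by construction
```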

Anyway, so maybe it's the same for the nontrivial as for the optimal. One doesn't typically seek an amount of nontriviality, instead one typically seeks nontrivia, complexuses, etc. Now, it's not so hard to understand what constitutes an optimal case, a probable case, and an informative or "newsy" case. But, if nontrivia are to be considered as being on some sort of par with optima, probabilities, and information, then what constitutes a nontrivium, a nontrivial case? Insofar as the words "nontrivial" and "deep" are typically used in a sense that does not invite precise determination, I think it better to speak of contingent truths pertinent to a question at hand, i.e., which help support non-vacuous conclusions. These could be data in a typical sense, or some sort of standing givens such as mutually independent (and consistent) postulates - one way or another, givens (for which the word "data" is the Latin).

Now, there are some other big-picture considerations here.

I'm thinking philosophically, analogically, so please bear with me. There are temporal issues involved with the conceptions of optima, probabilities, and information.

1. Optima and feasibles

are, for lack of a better word, potentialities (with the optima as "debentialities," lowest or most efficient potential expenditures, what would really be owed) for what could happen or be done given things as they stand; the impact of directly revealing or acting if one were to reveal or act now (the moment of decision); correlated more or less to the surface of the future light cone.

2. Probabilities

pertain to what is going to happen in the course of a future in virtue of repetitions; that which does happen thereby reaches 100% probability.

3. Information

is newsiness and pertains to what is coming to light or being actualized (correlated more or less to the surface of the past light cone) but not already settled; if the message's information is already known, then the information is zero.

So we have this pattern (of characterizations, not definitions):

[Image: rough analogy with the light cone — optima, probabilities, information, and factual bases, arranged as forming something like a light cone.]
1. Optima, most feasible, simplest, most efficient, etc. — things worth supposing, imagining, etc.
2. Probabilities — things worth expecting.
3. Information — things worth noticing.

Ergo (by completing the analogy):

4. Independent givens / data — things worth remembering.

This associates truth or fact in some sense with the nontrivial or deep, as well as with the complex, the complicated, etc.; an extra and independent given is a complication. Some mazy and labyrinthine complications have a kind of triviality when they don't teach real lessons, still they can be worth remembering — ask any lab rat. The idea of that which offers lessons worth learning, remembering, etc., that which is "educational" in some sense, that from which lessons or more or less secure conclusions can be drawn, is another thing which distinguishes the nontrivial from the distinctive, informative, etc. We learn from the past; experience is the great teacher. But can it be that logic's concern is simply that datum, fact, or basis (e.g., some postulates) from which one can draw conclusions? What is the complexity in it - simply that it is non-tautologously true? This seems to be missing something in that which mathematicians mean by "nontrivial" and "deep."

There's still another big-picture issue — what you might call that of subjective nontriviality versus objective nontriviality

but which would better be called aspectual nontriviality versus transpectual nontriviality. I alluded to it above in distinguishing contingent relevant givens from the "nontrivial" in mathematicians' sense. This could be a newsy distinction, since I haven't found any notice of it as a possible source of confusion.

Take a nontrivial equivalence between mathematical propositions — its nontriviality is a nontriviality in outward aspect for the same reason as that behind mathematicians' joke that anything proven is trivial. I don't want to call it "subjective" since that would imply incorporating a subjective judgment into the reasoning itself about a mathematical structure, just as "subjective probability" suggests trying to quantify one's subjective expectation in a specific case. As for "transpectual," I just mean that as the opposite to "aspectual": if two statements are different in form but logically (or as it is sometimes said, "formally") equivalent, then they are different aspectually but the same transpectually (i.e., when you look through them enough). Equipollence (equivalence between propositions) is a transpectual simplicity; mutual non-implication is a transpectual complexity. Independence (or as some express it, independence and consistency) among axioms or postulates is a transpectual complexity. Nontriviality as a criterion of value of equipollential inferences is ironic, and is ironic and aspectual in the same way as the analogous criteria for other modes of inference. (The ironic aspectual criteria may be used merely intuitively in devising methods of reasoning; whether one employs a method of incorporating specific subjective judgments of amount of likelihood or whatever into reasoning is another question, one which I'm not addressing.) Examination of the pattern in the next table lends some subjective probability to my claim of a valued aspect that is ironic in relation to the character of the inference.

Were it not for these various aspects in which various conclusions put their premisses — aspects natural, verisimilar (in Peirce's sense), new, and nontrivial, — we would see no point in reasoning at all.

I think I've said that somewhere before, but I don't remember off-hand.

| Mode of inference | Automatically preserves | Adds or removes info or otherwise | Ironic aspectual criterion | Some typical uses in science (more irony) |
| --- | --- | --- | --- | --- |
| Surmise | Neither truth nor falsity | Adds & removes info | Naturalness, simplicity, facility. What's worth supposing. (Surface of the future.) | Explaining (most simply) what has happened. |
| Induction | Not truth, but still falsity | Adds & doesn't remove info | Likeliness, Peircean verisimilitude. What's worth expecting. (Future.) | Analyzing (likely) what is happening. |
| Forward-only deduction | Truth, but not falsity | Removes & doesn't add info | Novelty, noteworthiness. What's worth noticing. (Surface of the past.) | Predicting (distinctively) what is going to happen. |
| Equipollential deduction | Truth & falsity | Neither removes nor adds info | Nontriviality, depth, complexity. What's worth remembering. (Past.) | Conditionally predicting (nontrivially) what would happen (reproducibility & more, not merely repeatability). The lesson, getting learned and applied. |
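Two of the table's preservation claims can be checked mechanically in toy propositional cases; conjunction alone suffices to illustrate. A minimal sketch (the example formulas are my own):

```python
from itertools import product

VALUATIONS = list(product([True, False], repeat=2))

def preserves_truth(premise, conclusion):
    # Whenever the premise is true, the conclusion is true.
    return all(conclusion(a, b) for a, b in VALUATIONS if premise(a, b))

def preserves_falsity(premise, conclusion):
    # Whenever the premise is false, the conclusion is false.
    return all(not conclusion(a, b) for a, b in VALUATIONS if not premise(a, b))

conj = lambda a, b: a and b  # premise: A & B
left = lambda a, b: a        # forward-only conclusion: A
conv = lambda a, b: b and a  # equipollent conclusion: B & A

# Forward-only deduction (A & B, ergo A): preserves truth, not falsity.
print(preserves_truth(conj, left), preserves_falsity(conj, left))  # True False

# Equipollential deduction (A & B, ergo B & A): preserves both.
print(preserves_truth(conj, conv), preserves_falsity(conj, conv))  # True True
```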

(Note: mathematical conclusions are often through equipollential deduction. For a common example, the induction step in mathematical induction is equipollential: the conjunction of the ancestral case and the heredity is equipollent to the conclusion. The conclusion is a universal hypothetical (in form) while the ancestral case is an existential particular, but the equipollence is intact because the existence of the well-ordered set to whose elements the hypothetical conclusion refers is already assumed and usually actually already proven. In a simpler case than mathematical induction, in a nonempty universe "whatever there is, is blue" (hypothetical in form) validly implies the existential "there is something blue." Proofs of the ancestral case and the heredity are often through equipollential deductions, though sometimes not so, especially when greater-than or less-than statements get involved.)
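That last simple case can itself be checked by brute force over small universes, and the check also shows why nonemptiness matters. A minimal sketch:

```python
from itertools import product

# "Whatever there is, is blue" implies "there is something blue" --
# in every universe except the empty one.
for size in range(4):
    for blue in product([True, False], repeat=size):
        if all(blue) and not any(blue):
            print(f"counterexample in a universe of size {size}")
# Prints only for size 0: all(()) is vacuously True, while any(()) is False.
```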

Modern science has been builded after the model of Galileo, who founded it on il lume naturale. That truly inspired prophet had said that, of two hypotheses, the simpler is to be preferred; but I was formerly one of those who, in our dull self-conceit fancying ourselves more sly than he, twisted the maxim to mean the logically simpler, the one that adds the least to what has been observed, in spite of three obvious objections: first, that so there was no support for any hypothesis; secondly, that by the same token we ought to content ourselves with simply formulating the special observations actually made; and thirdly, that every advance of science that further opens the truth to our view discloses a world of unexpected complications. It was not until long experience forced me to realise that subsequent discoveries were every time showing I had been wrong, while those who understood the maxim as Galileo had done, early unlocked the secret, that the scales fell from my eyes and my mind awoke to the broad and flaming daylight that it is the simpler Hypothesis in the sense of the more facile and natural, the one that instinct suggests, that must be preferred; for the reason that unless man have a natural bent in accordance with nature's, he has no chance of understanding nature at all. Many tests of this principle and positive fact, relating as well to my own studies as to the researches of others, have confirmed me in this opinion; and when I shall come to set them forth in a book, their array will convince everybody. Oh no! I am forgetting that armour, impenetrable by accurate thought, in which the rank and file of minds are clad! They may, for example, get the notion that my proposition involves a denial of the rigidity of the laws of association: it would be quite on a par with much that is current. I do not mean that logical simplicity is a consideration of no value at all, but only that its value is badly secondary to that of simplicity in the other sense.

— Charles Sanders Peirce, "A Neglected Argument for the Reality of God."

1. A surmise to a best or "optimal" explanation seeks a kind of aspectual optimality, one for which we do not expect a deductive standard optimization algorithm. Such a surmise, seeking simplicity, is actually complex, and usually both adds and removes information (or data, or claims, or howsoever you wish to think of it). That's ironic. It's as if such surmise were seeking to compensate for its own complexity by seeking simplicity. Some speak in this connection of parsimony in hypothesis-formation, but the desirable simplicity should not be confused with "logical simplicity" as Peirce notes (see sidebar) — rather it's an idea of that which is most natural, "facile" as Peirce says, or feasible. The hypothesis imputes a phenomenon to causes or reasons to which the phenomenon seems to point. The hypothesis points a way that seems ready to lead to further indications. The aspect of naturalness and facility is not only for hypotheses in the usual sense. Insofar as any theory's bad match to experimental results (in physical, material, and biological sciences and in human and social studies) can be explained away by additional hypotheses, there's always a role for the simplest explanation — the simplest "hypothesis" to account for a theory's persistent bad results is that the theory is wrong.
  Now, in order to distinguish the naturalness and facility desirable in a surmise, I will call it viatility. (Latin viare means to travel, take a path, from the Latin word via which means "way" or "path.")

2. An induction seeks a kind of aspectual probability or likeliness. At early or crude stages when there is insufficient data to support its conclusion with high confidence, one seeks at least an aspect or appearance of likeliness. Yet an induction, in adding unpremissed information into the conclusion (and omitting no premissed information from the conclusion), increases not probability but information. That's ironic. It's as if induction were seeking to compensate for its informativeness by seeking likeliness, by sticking to the assumption that the larger or total population will resemble the currently available data. The word verisimilitude has taken on meanings which differ from that which Peirce meant by the verisimilitude of an inductive conclusion — a verisimilitude consisting in this, that if pertinent further data were to continue endlessly to have the same character as the data supporting the conclusion, the conclusion would be proven true. Peirce's verisimilitude is not what is often currently meant by the word, namely the closeness of a theory to an issue's full truth such as sufficient investigation would find; instead it means likeness to the less-than-whole truth, the data garnered up until now. Ceteris paribus, the conclusion most faithful, most correlated, to the character of the currently available data seems the best candidate to hold up over time, increase in actual inveterateness as time goes by; it has a correlation that seems likely to lead to further correlations. The inductive conclusion should, aspectually, not seem to add anything new, but instead seem inveterate, typical, "conservative" in a sense, maybe even seem to remove or smooth information.
  In order to distinguish the verisimilitude desirable in an induction, I will call it veteratility, which is Peircean verisimilitude.

3. A categorical syllogism or other "forward-only" deduction seeks to bring information to light — but it doesn't really increase information, it reduces it. That's ironic. There's little that I can find about efforts to quantify the "psychological novelty" (as various folks have called it) or "new aspect" (as Peirce called it) or seeming informativeness of a forward-only deduction's conclusion. It's aspectual. Another way to look at it is that a forward-only deduction increases probability (if the premisses are assigned probabilities between 0% and 100%); the conclusion, in order to be true, doesn't absolutely need all (or sometimes any) of the premisses to be true. Anyway, it's as if forward-only deduction were seeking to compensate for its decrease of information (or increase of probability) by seeking newsiness, especially in the sense of a meaningfulness, a difference that makes a difference. This kind of deduction elucidatively symbolizes formal implications drawn from premisses.
  In order to distinguish the news-like aspect desirable in a "forward-only" deduction, I will call it novatility.

4. So, continuing the pattern, the nontriviality of a deduction through equivalences or equipollences will be aspectual and ironic. Equipollential deduction neither adds nor removes information, and it's as if it were seeking to compensate for that simplicity with a kind of complexity in the sense here called aspectual. The transpectual complexity or complexus will involve independences, mutual non-redundancies, etc. It is surmise (by which I mean inference that both adds and removes information) that is transpectually nontrivial, even though surmise ironically seeks a kind of aspectual simplicity, naturalness, etc. Now, an aspectual informativeness (psychological novelty, new aspect, whatever one wishes to call it) is sought through a categorical syllogism or other forward-only deduction — an extrication of information by removing some of the clutter, so to speak, of the premisses. That (aspectual) informativeness is not to be confused with its kin, the (aspectual) nontriviality that is sought through equipollential deduction; that nontriviality consists (as far as I can tell) in the outward disparities of things bridged by a proven equipollence, a bridge which one may wish to cross and recross in either direction. The nontrivial or deep promises to stand as a basis for further such conclusions. The equipollential deduction stands as a further-transformable proxy or model of the premissed objects; if it is a true proxy (I don't mean a substitute index) or model, it is definitively determined by the same laws or postulates as the objects are amid transformations, but may be better for teaching their lessons through a given line of experimentation; indeed so, if it is nontrivial.
  In order to distinguish the nontriviality or depth desirable in an equipollential deduction, I should call it statility or tardatility in keeping with an analogy with special-relativistic kinematics that I've been using here (as well as the analogy to the light cone's zones that I used earlier above), but the coinages "statility" and "tardatility" inherit some extra connotations that don't seem to work well enough, so I will call it basatility.

[Image: the four items below arranged in a 2×2 square, with an X of crossing diagonals linking the opposed corners; each diagonal itself a narrow X.]

Viatility (kinematic analogue: ∆d, change of displacement) — a surmise's plausibility in the sense of naturalness, facility.

Veteratility (kinematic analogue: ∆τ, change of proper time) — an induction's (Peircean) verisimilitude.

Novatility (kinematic analogue: t−∆τ) — a 'forward-only' deduction's elucidative, novel aspect.

Basatility (kinematic analogue: t−∆d, where c=1) — an equipollential deduction's aspect of nontriviality or depth.

|  | Aspectual: merit that inferences in a given mode vary in having, but which seems hard to quantify or render exact, at least hard to do so fruitfully | Transpectual: content, quantity, or status deducible from given parameters of a total population, universe of discourse, etc. |
| --- | --- | --- |
| Worth supposing if pertinent. | Viatile, natural, facile (surmise). | Optimum / feasible. |
| Worth expecting if pertinent. | Veteratile, verisimilar in Peirce's sense (induction). | Probability. |
| Worth noticing if pertinent. | Novatile, new in aspect ('forward-only' deduction). | Information. |
| Worth remembering if pertinent. | Basatile, nontrivial (equipollential deduction). | Independent given, fact, datum. |

Now is a time at the Speculation Lounge when we speculate. One might ask, for example, does the "veteratility" of an induction consist in the probability that the premissed sample would have if the inductive conclusion were true? Likewise does the 'viatility' of an abduction consist in the optimality or high feasibility that a phenomenon would have if the explanation were true? Does the 'novatility' of a 'forward-only' deduction consist in the information (truthful news) that the conclusion would re-state if the (non-axiomatic / non-postulational) premisses were truthful news? I suspect that the answer is no.

The case of surmise is instructive. Now, what I mean by 'surmise' is much like what Peirce meant by "abductive inference". Peirce in his later years allows as a form of abduction the inference to a new rule combined with a special circumstance, to explain a surprising phenomenon. Although one can formulate an inference to a rule in such a way that it is neither automatically truth-preservative nor automatically falsity-preservative (my definition of surmise), such as the toylike example "EGH ergo A(G-->H)", I think that a surmise involving a new rule usually really involves a hypothetical induction, and that such an induction is involved whether it originates/modifies a rule or merely extends a rule to cover the surprising phenomenon. The surmise itself is the basis for a hypothetical induction, as well as a hypothetical deduction of testable consequences. The rule in question may be an extremal principle or a combination of constraints, or a distribution or frequency, etc., or an informative rule of dependency, or something else. Note that I am not discussing at this point the inductive evaluation of tests of a prediction deduced from the explanatory hypothesis/surmise. The question at this point is whether the induced rule would continue to assign all the cases under it appropriate optima/feasibility, probability, informativeness, or whatever. So one can deduce the resultant optima/feasibilities, or probabilities, or whatever, covering all known cases, and, in light of that, at least try to decide whether the induction is veteratile enough (I'm unsure about what sort of inference such a decision itself involves). As for viatility, one may consider that without regard to the veteratility of the induced or extended rule — would an elliptical orbit account for the surprising observations of a given planet, apart from whether there's a more general rule of elliptical orbits? That is like asking whether a person walked in an elliptical path, apart from whether many people do that as a rule. This can be a chiefly mathematical question, whose terms may be embodied approximately in a model that works according to one's hypothesis; think of a professional magician like James Randi producing a supposedly paranormal effect just to show that it can be done by non-paranormal means — a kind of proof of concept. The surmise may involve not a pattern of motion but a composition of materials or attributes, and so on.
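As a toy embodiment of that elliptical-path question: a minimal sketch, with fabricated "observations," asking whether an ellipse would account for them (the data, noise level, and fitting choice are all merely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Fabricated "observations": points near an ellipse, plus measurement noise.
t = np.linspace(0, 2 * np.pi, 40, endpoint=False)
x = 5 * np.cos(t) + rng.normal(scale=0.05, size=t.size)
y = 3 * np.sin(t) + rng.normal(scale=0.05, size=t.size)

# Fit a general conic A x^2 + B xy + C y^2 + D x + E y = 1 by least squares.
M = np.column_stack([x**2, x * y, y**2, x, y])
coef, *_ = np.linalg.lstsq(M, np.ones_like(x), rcond=None)

# Small residuals: an ellipse would account for the observations.
print("max residual:", np.abs(M @ coef - 1).max())

# Discriminant test: B^2 - 4AC < 0 means the fitted conic is an ellipse.
A, B, C = coef[:3]
print("ellipse?", B**2 - 4 * A * C < 0)
```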

Anyway, at some point one deduces testable predictions (and here I think that novatility has a role), and inductively evaluates the repeated tests, and those reproductions of tests similar enough to be counted as repetitions; as for reproductions in different, though in some sense equivalent, forms, their collective evaluation becomes abductive, 'surmisual'. In the end, it's all surmise-based, howsoever cogent the surmises, insofar as the premisses in a special science come down to perceptual judgments; the special-scientific application, for example, of an induction implies the defeasible surmise that all its premisses are in fact true and depends for its validity on the idea that sufficient further research would correct its errors.

Plausibility (viatility) and Peircean verisimilitude (veteratility) suggest truth; a novel aspect (novatility) and nontriviality (basatility) suggest falsity.

That answers the following symmetry-based objection:

It might be objected that while 'viatility' (naturalness, plausibility) and 'veteratility' (Peircean verisimilitude) incline one at least a little toward holding a conclusion to be true, on the other hand 'novatility' and 'basatility' do not seem to do so (except in the sense perhaps of fostering the hope). Besides, if the deduction is valid, then its conclusion is true if its premisses are true; how would novelty or nontriviality increase that kind of assurance? There seems something inconsistent or non-symmetrical about it.

However, there's a deeper symmetry. To the extent that a deductive conclusion seems 'novatile' (novel) or 'basatile' (like an independent basis), that appearance may naturally incline one to doubt the conclusion, incline one to the conclusion's possible falsity, and towards checking one's premisses and inferences. It is not unheard of that one's premisses and/or inferences sometimes involve errors. Doubt is not always a bad thing. Why shouldn't occasions for doubt occur naturally in good necessary inference? I'd say that occasions for doubt will occur if the deduction is worth doing. A kind of dubitability is built, so to speak, into the nature of reason, reasoning that is deductively necessary as well as reasoning that is non-deductive and contingent.

Of course, in a certain sense, a solution or conclusion can seem all too 'viatile' - seem glib or facile in that word's present-day usual sense, - or all too 'veteratile' - seem too conservative, standard, nothing-to-see-here, etc. Still, I think that, to the extent that a conclusion is not skewed by hope, fear, etc., a surmise's viatility and an induction's veteratility properly favor their respective conclusions (variably but at least a little), while deductions' novatility and basatility properly disfavor their conclusions (variably but at least a little).

Meanwhile, as to the nontrivial, the complication, the datum, can't I do better than I've done?

Here I'm describing as aspectual the typical sense of "nontrivial" in mathematical talk; but what makes something "transpectually" nontrivial?

Update December 28, 2013: I think that I have now done better. Arity, or adicity (as in monadic, dyadic, triadic, etc.), seems to be the "transpectual complication" that I was looking for. Scroll down to the update under "Correlated operations." End of update.

Is that kind of nontrivial simply a set of independent facts or truths or givens, i.e., such that they couldn't have proven or disproven one another? Do they have to be "facts or givens worth remembering" or is it enough that their interrelations are facts or givens worth remembering? Are such complexuses really the core of logical ideas, such that logic should have been named for them ("givens theory" or "complexus theory" or whatever), just as probability theory is named for probability, and so on? They may be optimal or otherwise (or more precisely, perhaps, they may be such that they would have been optimal or otherwise); but they are the paths which have been traveled, the structures which have been built. Is that it? Is a "transpectually nontrivial" statement simply one that is consistent and materially true, and perhaps pertinent to a question at hand — in short, just not tautologously true? But isn't logic about formal truth, not material truth?

Actually that's not what bothers me. Basic deductive logic is about deducing material truths from other material truths - more or less, facts from facts, be the basal facts postulated or established observationally or merely supposed as premisses for the sake of argument. In that sense deductive logic is about material or nontautologous truths in the same sense that probability theory is about probabilities (and optimization theory is about optima, and information theory is about information). I like that idea of transpectual nontriviality: it avoids suggesting that lengthy convolution of an argument is the essence of nontriviality or depth and somehow logically "better," more "logicful," when in real life such an argument is riskier, less likely to escape a weakest-link problem. Such convolution increases aspectual nontriviality (sometimes only in a superficial way, to boot), not transpectual nontriviality, much less security or factuality.

A deduction does not automatically turn its concluding proposition into a truth of logic. The fact that Socrates is mortal is a material truth even if deduced from other material truths, even if deduced from a postulate or axiom that Socrates is mortal. If it is postulated that Socrates is mortal in advance of premisses, a premissual proposition "Socrates is mortal" is part of the tautology "Socrates is mortal by the postulate that Socrates is mortal", but the fact, the datum, that Socrates is mortal is not tautologously true. The nontrivium is that basis on which conclusions - further bases - can be drawn. A set of such postulates - say, Euclid's five postulates - independent (and consistent), has more transpectual nontriviality or depth than any single such postulate. So, if nontriviality can't be usefully quantified (except as number of independent givens or postulates or the like, whatever that tells you, given their varying internal complexities, even assuming that each is "indivisible" or "atomic" in some sense), maybe it can at least be ordered. Add a postulate, enrich or deepen the system - transpectually if not aspectually. (Should one say that Gödel statements are transpectually nontrivial but aspectually trivial in the mathematical system in which they are true but unprovable?) Even an axiom of propositional logic is not completely trivial, when it is introduced as an axiom, though from it by itself there follows little if anything. Those considerations may seem a bit slippery but they're not what bother me.

What bothers me is that in a sense I'm saying that "transpectual" nontrivia are basically data, givens, facts, i.e., such that one can draw conclusions from them (well, that's the good part), but data are often quantified just like information, in bits, bytes, etc.; so, are data really something different from information or are they merely information such that one doesn't demand that they be new, previously unknown, etc.?

Maybe I shouldn't make a big deal about it, and I already fear that this is one of the most ignorance-parading posts that I've ever written. After all, as I mentioned, there's a duality between optimization and probability where cost (a kind of lowness of feasibility) corresponds to probability. (To repeat myself: one would think it more intuitive that cost would correspond to improbability formulated as 1 minus probability, but I don't know whether that leads to problems or is merely less convenient for expositing the duality.) An amount of information depends in a sense on what question was asked. Did a given horse win a race? Yes or no? That's one bit of information, as if the probability of the horse's winning had been 50% when it almost certainly was not. So maybe I shouldn't worry about data's seeming like information any more than about feasibility's seeming like probability. Now, as to a datum qua datum, we're concerned not with how newsy it is, how improbable it was before it happened, given that which was already known, etc., but with the complication or complexification that it brings (what would have been its "suboptimal" character before it happened) and what conclusions can be drawn from it. Some say that information is a difference that makes a difference. Perhaps one could say that a nontrivium is a basis for a further basis. That's much like the online Merriam-Webster's first definition of data: "factual information (as measurements or statistics) used as a basis for reasoning, discussion, or calculation."

It also bothers me that this reduces complexity/complication to a kind of randomness. It's as if, in going conceptually from optima to probabilities to information to facts, one settles into a kind of heat death of material or non-logical truths. All I can think of at the moment is that the randomness is real in a sense, but that it's why it matters that the data be data, facts, givens in some sense, not just newsy announcements, or probables, or optima or feasibles.

Achilles and the Tortoise and the Hare.

Here's another way to see optima (simplicities, efficiencies, etc.) and independent givens / nontrivia as opposites in some sense.

A problem with the quantity called the logical depth of a theorem is that it involves the number of steps that it takes to prove the theorem. If somebody finds a shorter proof, then the quantity changes; it's not "hard-core." Shalizi asserts the shortfall or lack of general usefulness of quantities of depth/complexity, and again I note likewise the lack of a generally useful quantity of optimality or feasibility; instead we wish to know what are the optima, what are the complications/data/nontrivia, etc. Now, we can think of a proof's steps as "middles," and more generally, any but the shortest proof as feasible but not optimal or shortest-path. But in the mathematical case, the shortest possible proof is the postulate set followed directly by the theorem as conclusion, even if we slowpokes don't see how the postulates lead to the theorem. We don't desire the truly shortest proof in the case of a theorem. We want something less short than shortest - but not the longest, most circuitous, either, which would just be the opposite extreme.
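To see the proof-relativity concretely, here is a minimal sketch with a made-up toy rewrite system standing in for a proof calculus; the "depth" it reports is just the length of the shortest derivation the search certifies, and it would change under a different rule set:

```python
from collections import deque

# Toy rewrite system: a "proof" is a chain of rewrites from axiom to theorem.
rules = [("A", "AB"), ("B", "BB"), ("AB", "C")]  # hypothetical rules

def shortest_derivation(axiom: str, theorem: str, limit: int = 10):
    # Breadth-first search: number of steps in the shortest derivation found.
    queue, seen = deque([(axiom, 0)]), {axiom}
    while queue:
        s, steps = queue.popleft()
        if s == theorem:
            return steps
        if steps == limit or len(s) > 2 * len(theorem):
            continue  # crude bounds to keep the toy search finite
        for lhs, rhs in rules:
            i = s.find(lhs)
            while i != -1:
                t = s[:i] + rhs + s[i + len(lhs):]
                if t not in seen:
                    seen.add(t)
                    queue.append((t, steps + 1))
                i = s.find(lhs, i + 1)
    return None

print(shortest_derivation("A", "CB"))  # 3: e.g., A -> AB -> ABB -> CB
```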

Consider Lewis Carroll's example of Achilles and the Tortoise. (I thought of Carroll's example because Cathy Legg has been writing about it.) The Tortoise is slow and can't see how to get from the two premisses, "all A is B" and "all B is C", to the conclusion "all A is C". So Achilles adds a third premiss, "if all A is B and if all B is C, then all A is C". But the Tortoise doesn't see how to get from the now-three premisses to the conclusion that indeed all A is C. Now suppose that the Hare joins them, and they start discussing Euclidean geometry. Achilles can't get the Tortoise to infer from the five postulates to the Pythagorean Theorem. On the other hand, Achilles can't get the Hare to pause for the intermediate steps between the five postulates and the Pythagorean Theorem, or even for the Pythagorean Theorem itself; the Hare would always go straight from postulates to conclusion, except the Hare is even quicker than that. Insofar as the Pythagorean Theorem is itself an intermediate step to further theorems, the Hare sees no reason to pause with it. In fact the Hare rests satisfied with the five postulates as making sufficiently evident all the implied theorems without need for elaboration. For the Hare, all that we call theorems are but direct corollaries of the postulates and are not even worth mentioning. Some might say that it's as if the Hare naps and loses the race to Achilles and maybe to the Tortoise too. In any case, logic is not done from the Hare's viewpoint, nor from the other extreme, the Tortoise's viewpoint. Logic is properly concerned with givens and middles of various kinds, just as ordered sets are concerned with heredity, convergence, etc.

Correlated operations.

Looking back at optima for a hint — maybe there's no standard way to quantify optimality, but one can often think of an optimum as a distance with a direction or directions — a shortest path for instance, or the location of a minimum of a curve, etc. Even if it's only a rough idea, still one discerns a pattern, one that I've noticed before:

optimum — difference (with direction or directions)
probability — ratio
information — logarithm


Note that this blog's title does include the phrase "Speculation Lounge"! Now for a look into the Speculation Lounge's Rank-Speculation Sub-Lounge.

So one might expect, simply on the superficial appearance of the pattern, that the nontrivial can be thought of as the next in the series "difference, ratio, logarithm." As to the ordering "optimum, probability, information," I didn't reach that from considering the pattern "difference, ratio, logarithm." Instead I got it as part of a broad pattern (see table on right).
Some sort of proportion or analogy here:

| Optima | Decision processes | Motion, forces |
| Probability | Stochastic processes | Matter |
| Information | Communication processes | Life |
| Data, nontrivia, bases (for further conclusions) | Logic, learning processes | Mind |

Well, it's hard to decide the next term after "logarithm" with confidence, when one expects only a four-term series (I expect it for various reasons, including the fourfold correlations outlined earlier in this post). Now, subtraction (finding a difference) is the inverse of addition, and division (finding a quotient or ratio) is the inverse of multiplication. Yet finding a logarithm is one of two inverses of exponentiation (raising to a power), the other being to find a root or base. A root with a direction? (Now I'm thinking of complex roots.) A base?
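The asymmetry between the one-inverse and two-inverse operations is easy to display. A minimal sketch:

```python
import math

b, n = 2.0, 3.0
x = b ** n              # 8.0: exponentiation

# Addition and multiplication each have one inverse; exponentiation has two.
print(math.log(x, b))   # ~3.0: the logarithm recovers the exponent
print(x ** (1 / n))     # ~2.0: the root recovers the base
```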

Update December 18, 2013: I now think that it's a root or base, corresponding to arity, adicity, valency, etc., as in monadic, dyadic, triadic, etc. A succession of relations of constant arity is a root or base raised to successively higher powers. In first-order logic, the first-power arity is the usual concern, and arity with more than one degree is what deepens its interest, that is, polyadicity is where such logic becomes nontrivial. Quine, in Methods of Logic, 4th Edition, p. 137:

We have our test of validity by existential conditionals, and little is left to be desired—until we move from absolute or monadic terms like 'book' to relative or dyadic terms like 'uncle'. This is the move that complicates logic and makes for its stature as a serious subject.

Polyadicity is itself a kind of complication, and is a source of complication in logic. It seems to fill the bill as the "transpectual" kind of complexity or complication that I had been seeking earlier in this post. It is a property of special, perhaps even partly definitive, interest in regard to data, givens. Now, in a relational database, the order of the so-called attributes related by a relation does not matter, but in logic, questions of order in a polyad certainly can matter, so size is not the only thing that can matter for arity. Hence, direction can matter in the movement from a polyadic term to its objects, i.e., in a dyad, the direction of movement from the dyadic predicate to the left subject or the right subject.

So, if this is the case, then deductive logic is the deductive study of data, givens, bases for drawing conclusions, and processes of inference to conclusions as further bases, the data themselves understood as adically related or composed, i.e., as "datads" or "datumplexes" or "dedomenads" (words that I've just coined; the last one is from Greek). End of update.
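A minimal sketch of those two points, order mattering within a dyad, and constant arity behaving like a base raised to higher powers (the names and numbers are made up):

```python
# A dyadic relation as a set of ordered pairs: order matters, unlike the
# unordered attributes of a relational-database relation.
uncle = {("Bob", "Alice"), ("Ted", "Carol")}  # made-up (uncle, niece/nephew) pairs
converse = {(y, x) for x, y in uncle}         # the converse relation

print(uncle == converse)  # False: direction through the dyad matters

# A universe of n individuals admits n ** k possible k-ads:
# constant arity raised to successively higher powers, as in the update above.
n = 3
for k in (1, 2, 3):   # monads, dyads, triads
    print(k, n ** k)  # 3, 9, 27
```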

Multi-valued logic? MVL has not been a big, thriving field, so far as I can tell, but on the other hand fuzzy logic is a kind of MVL, so maybe I shouldn't speak so fast. Anyway, if you have a higher numeric base, a larger alphabet, a larger lexicon, etc., you can express things with more concision, in a sense you have increased memory capacity too, do you have an increase in some sense in that which is worth remembering (learn the ABCs, expand your vocabulary, etc.)? (I resist this in part because of the terminological coincidence between a numeric base and a basis for a conclusion. Is it just a pun of ideas?) The other alternative seems to be the hyperlogarithm, or maybe an endless series (hyperlog, hyper-hyperlog, etc.), some sort of orders of nontriviality; one starts thinking of powersets and so on. Now, all that I'm seeking here is an idea in terms of which we can think merely roughly of the nontrivial, but this sort of thing leaves me shaking my head as usual.

So I have to leave it here for the time being as it stands. It's a difficult question that has me taking shots in the dark.

(Note on the double-chiasm image near post's top: I've given Hyatt Carter total permission to use the image freely as he pleases, for example here.)
