Alfred Lang
Mail List Contributions 1993
Contributions to the Peirce Discussion List, 1993
@GenSem @CogEmot
Last revised 98.11.02
Re: Awareness, Consciousness etc.
© 1998 by Alfred Lang
Scientific and educational use permitted
To: Peirce-L 93.11.27 Subject: Consciousness etc.
With great interest, though little time, I have followed the discussions around Penrose's argument that a valid description of (human) intelligence needs to transcend the algorithmic domain and perhaps include something like consciousness. Now, while I agree with the principal thesis, I doubt that a concept like consciousness, with its heavy load of surplus meaning accumulated over the centuries and its dualistic implications, can do any good as long as it does not find a relational definition. (But I must confess to not having seen Penrose's book.)
So I would like to sketch an attempt to construe what can go beyond algorithms in terms of triadic semiotic. I understand algorithms to be finite rule sets applicable to definite operand sets, which can produce finite or infinite sets of results; algorithms are at the base of enumerability, computability and decidability. In other words, algorithms can only produce definite, if infinite, results, i.e. more of the same within a rule; and they cannot apply to indefinite operand sets, i.e. to anything outside their range.
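To make this reading of "algorithm" concrete, here is a minimal sketch in Python (the rule set and all names are invented for illustration): a finite rule set over a definite operand set can enumerate endlessly, but only ever produces more of the same within a rule, and it simply cannot apply to an operand outside its range.

    # A toy algorithm: a finite rule set applicable to a definite
    # operand set (both invented for illustration).
    RULES = {"a": "ab", "b": "a"}    # the finite rule set
    OPERANDS = set(RULES)            # the definite operand set

    def step(word):
        # Rewrite every symbol by its rule; any symbol outside the
        # operand set has no rule and cannot be processed.
        return "".join(RULES[symbol] for symbol in word)

    word = "a"
    for _ in range(5):
        word = step(word)
        print(word)    # ab, aba, abaab, ... more of the same within a rule

    try:
        step("c")      # "c" lies outside the definite operand set
    except KeyError:
        print("no rule applies: the operand is outside the range")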
Now, comparing this to intuitions of and experiences with the open evolution of sign systems of biotic, personal or cultural kinds, there seems to be some contradiction. And, historically, this was exactly the dilemma of the Enlightenment thinkers who, having disposed of the idea of divine order, were left with mechanical order on the one side and the desires and facts of freedom on the other. I think the solution lies in asymmetric system differentiation. It can probably best be described semiotically.
Any sign system of minimal complexity can possibly show that evolutionary character of being both lawful and unpredictable, if it differentiates into (at least two) affine and related parts or partsystems, one of which tends toward the singular and the other toward the general. In other words, one of them functions by generalizing aspects of the other, which itself functions by specifying or individualizing effects from the first. So I feel that what we mean by notions such as consciousness is a question of kind rather than of degree of complexity. I think it possible to relate notions of heuristic to this idea of asymmetric affinity of related systems.
Let me explain: I speak of sign systems generally. The examples I have in mind are mostly brain-minds or person-culture-systems. "Related" refers to the possibility of semioses going between the partsystems. "Affine" is used to characterize a usual effect of co-evolution, namely that emerging partsystems have much in common or "know" of each other but are not complete replicas of each other. So a condition originating in partsystem A and having effects in partsystem B through some physical channel can go far beyond what you would expect from simply analyzing the events in the channel relating the two partsystems. In the phrasing I introduced earlier: system and partsystems are considered semions, the exchange between them semiosis.
Take an example of Peirce's (MS 318, CP 5.473): if, in a military exercise setting (the system), an officer (partsystem A) shouts "Ground Arms!" (signal, channel), the soldiers' compliance (partsystem B) is unthinkable without a lot of "context" both within the officer and the soldiers and in the common surround, a sort of ground for the figure of intending, commanding, understanding and executing the order. Such a context comes naturally with the co-evolution of the partsystems, assured by a common military training tradition. That, in fact, is in my view what distinguishes semiosic from reactive causation, to remain in Peircean realms, and, in addition, what distinguishes triadic semiotic from codification-decodification or "postal package" semiotic (Rossi-Landi; see Petrilli in Semiotica 97, 1/2, 1993).
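One might caricature this point in code (a toy of my own, not an analysis of Peirce's example): the channel carries only a terse token, and the soldiers' rich compliance draws on a co-evolved context that the token itself does not contain.

    # Two affine partsystems sharing a co-evolved context; the channel
    # carries only a short token. (All names here are invented.)
    SHARED_TRAINING = {
        "ground arms": ["lower rifle", "rest butt on ground", "stand at ease"],
    }

    def partsystem_a(intention):
        return intention                   # a few bytes into the channel

    def partsystem_b(signal):
        # B's response is underdetermined by the signal alone; the
        # common training tradition supplies the "ground" for it.
        return SHARED_TRAINING.get(signal, ["no common ground: signal opaque"])

    print(partsystem_b(partsystem_a("ground arms")))  # rich compliance
    print(partsystem_b("gavagai"))                    # opaque without affinity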
Now, you can say the officer is a generalizing partsystem in comparison with the singularizing soldiers' partsystem. The officer functions on the basis of neglecting the individual soldiers: he treats them all the same, as a class, wants them to do exactly the same, and functions in reference to some threshold of equality of preconditions and response. As soon as, in his perception, this threshold is passed, he reacts and has the exercise repeated. As long as things operate within the class threshold, the officer is indeed capable of commanding, by one general, the execution of many singulars within the soldiers' partsystem. You might complicate things by assuming the general command of, say, storming the fortress, whereupon every one of the soldiers might singularly execute his own particular action, some going directly, some sideways, some operating howitzers, others flying aircraft, etc., yet all of them governed by the one general operating plan presented in the command. Also, the officer would evaluate the aggregate of all these singular operations and the events brought about by them in view of the general operating plan, and would give corrective or supplementing orders, again on the basis of his general view and objectives. So you could say the officer functions in a role equivalent to what we take conscious perceiving, comparing, deciding, directing etc. to be in our direct experience of guiding ourselves.
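The officer's threshold logic might be sketched as follows (numbers and names are invented; the point is only that one general governs many singulars, with intervention past a class threshold):

    import random

    THRESHOLD = 0.2    # tolerated deviation within the class (invented)

    def execute_singular(soldier_id, command):
        # Each soldier realizes the general command with individual variation.
        return {"soldier": soldier_id, "deviation": random.uniform(0.0, 0.3)}

    def officer(command, squad):
        executions = [execute_singular(s, command) for s in squad]
        # The officer neglects the individuals and evaluates only the
        # aggregate against the general operating plan.
        mean_dev = sum(e["deviation"] for e in executions) / len(executions)
        return "repeat the exercise" if mean_dev > THRESHOLD else "carry on"

    print(officer("Ground arms!", range(10)))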
It seems to me that there are possibilities, but also, obviously and primarily, limits in representing that general-singular partsystem relation and exchange algorithmically. The requirement would be that you have predefined all possible singular events in the camp or on the battlefield in terms of a finite set of generals, i.e. that you have at hand, or are yourself, a Laplacian Demon who knows everything possible in all its possible relations at all times. Now this forbids partsystems of relative autonomy, and it forbids true development. The only certain way would be to have constructed all possible singulars as definite specifications of one's own generals in advance, which is what mathematicians do, and what self-contained, closed computer systems do.
As soon as you have a (human or other machine) interface on that machine whereupon some partner can operate in whatever manner pleases him, without fitting the predefined classes of the machine, things will go awry. It is obvious that defining a residual class for everything not defined is a bad crutch with possibly disastrous effects. On the other hand, having the generalizing partsystem develop its generals itself upon input from the singularizing system would do. I would suppose this to demand similarity operations. Now similarity in reference to open systems is inimical to algorithms altogether. Things might change when reasonable distributed (re)presentation (PDP, neural nets, etc.) becomes operational. I have not seen so far, however, PDP designs that differentiate in the manner proposed above (but I might have overlooked something). PDPs seem to be excellent generalizers; but can they also singularize? And how can the two principles be brought to cooperate in machines?
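The contrast might be put in code like this (a toy of my own, not a PDP design; vectors, threshold and names are all invented): a fixed classifier dumps everything unforeseen into a residual class, whereas one equipped with a similarity operation develops a new general out of the unforeseen singular.

    def similarity(x, y):
        # crude similarity on equal-length tuples of numbers in [0, 1]
        return 1.0 - sum(abs(a - b) for a, b in zip(x, y)) / len(x)

    class FixedClassifier:
        def __init__(self, classes):
            self.classes = dict(classes)       # predefined generals
        def classify(self, x):
            best = max(self.classes, key=lambda c: similarity(x, self.classes[c]))
            if similarity(x, self.classes[best]) < 0.5:
                return "RESIDUAL"              # the bad crutch
            return best

    class DevelopingClassifier(FixedClassifier):
        def classify(self, x):
            label = super().classify(x)
            if label == "RESIDUAL":
                # develop a new general upon the unforeseen singular
                label = "class_%d" % len(self.classes)
                self.classes[label] = x
            return label

    seed = {"class_0": (0.0, 0.0)}
    print(FixedClassifier(seed).classify((1.0, 1.0)))   # RESIDUAL
    open_system = DevelopingClassifier(seed)
    print(open_system.classify((1.0, 1.0)))             # class_1, newly formed
    print(open_system.classify((0.9, 1.0)))             # class_1 now generalizes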
I think that is where notions of triadic semiosis can enter the field. Mediating interpretants, as I understand them, are basically generalizers: they converge many possible referents upon one singular presentant. (That is what I understand Peirce's notion of a law to be.) Interpretants differentiate a large or small, infinite or even indefinite variation of a referent field upon singular presentants, mapping, e.g., any number of social situations into either the word "gavagai" or some "not-gavagai". To the extent that they then succeed, in new contexts in that field, in having the general ref-sign "gavagai" generate new singular pre-signs in a consistent, i.e. in turn generalizable, manner, they govern things in that field. Isn't that what officers or other leaders do in social systems, or what we feel our conscious attending and intending does in conducting us as a person?
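A toy rendering of this double movement (the predicate and the situations are invented): the interpretant converges an indefinite variety of referents upon one presentant, and the general sign then generates new singulars in fresh contexts.

    def interpretant(situation):
        # many possible referents converge upon one singular presentant
        return "gavagai" if "rabbit" in situation else "not-gavagai"

    field = ["rabbit by the bush", "rabbit running", "empty clearing"]
    print([interpretant(s) for s in field])
    # ['gavagai', 'gavagai', 'not-gavagai']

    def generate(presentant, context):
        # the general ref-sign generates a new singular pre-sign in context
        return "point at the " + context if presentant == "gavagai" else "pass"

    new_context = "rabbit in the grass"
    print(generate(interpretant(new_context), new_context))
    # 'point at the rabbit in the grass': consistent, in turn generalizable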
However, I would like to shift from consciousness terms to notions of asymmetric subsystem formation and functioning. Within the person, I thus construe secondary subsystems such as conscious experience, imagination, inner language, and even something like the self; secondary, because they stand in this asymmetric generalizing relation to the rest of the more direct and mainly singularizing, more routine brain-mind functions.
On the other hand, when algorithms apply to some set of operands, we observe degenerated or habituated interpretants, in the sense that their triadicity reduces to dyadicity, as in instincts or other routines. For example: they "know" what to expect, are capable of reacting categorically, and can produce any number of things of the same kind to represent a class or a sequence; yet they are "helpless" when unexpected situations arise, and they are incapable of producing qualitative novelty. As a side insight, you might reflect on that delicate border between the triadic and dyadic operating of officers or other social system functionaries who are normatively forced to deal with people in terms of algorithms and thus almost unavoidably become inhumane if they do not resist their dyadic degeneration.
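A degenerated, dyadic interpretant reduces to a bare stimulus-response table, roughly like this (the pairs are invented): any number of the same kind, but helpless before the unexpected.

    ROUTINE = {"whistle": "duck", "flag": "advance"}   # habituated pairs

    def dyadic(stimulus):
        return ROUTINE[stimulus]      # no mediation, no third

    print([dyadic("whistle") for _ in range(3)])       # same kind, any number
    try:
        dyadic("smoke signal")        # the unexpected situation
    except KeyError:
        print("helpless: the routine produces no qualitative novelty")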