Here's my take on this (I'm assuming that "thinking about" an idea means
that the idea is active in your cognitive ecology long enough for various
associated ideas to become activated and to interact).
I reckon that "rational" or pseudo-logical thinking is a kind of serial
information processing, running on top of (largely hidden from consciousness)
parallel processing in neural hardware.
Problem: it may be that your current cognitive state, when acted on by your
mind's Pseudo-Serial (rational processing) Algorithms (PSAs), will lead you
into a very sub-optimal rut, a mental blind alley. I reckon that "thinking
about something" involves allowing yourself to drift through related mental
states, and applying your PSAs to those states as well. In this way, you get
to think your way on from a number of nearby starting points in ideas-space,
which means you stand a higher chance of avoiding the shallow local ruts
that represent woefully sub-optimal mental conclusions. In effect, you're
using your mock-serial mind to simulate a parallel-processing rational mind.
So, in summary: parallel hardware, and a pseudo-serial emergent part-rational
mind, using simulated parallel processing (generating lots of different
starting points to follow chains of thought from) to optimise results.
Hopefully.
When you say "the real stagnation" I'd say "if you don't take the time to do
enough parallel searching through the space of possible ideas/solutions, the
probability that you'll end up with a VERY sub-optimal result is increased."
Wordy git, huh?
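(If it helps, here's a rough Python sketch of what I mean by "lots of
starting points". Purely illustrative and nothing to do with how brains
actually do it: random-restart hill climbing over a made-up toy ideas-space,
where each restart is one chain of thought and the set of restarts is the
simulated parallelism.)

    import random

    def hill_climb(start, score, neighbours, steps=100):
        # Follow one chain of thought: greedily move to better neighbouring
        # states until stuck in a local rut (a local optimum).
        state = start
        for _ in range(steps):
            best = max(neighbours(state), key=score, default=state)
            if score(best) <= score(state):
                break                               # stuck in a rut
            state = best
        return state

    def multi_start(score, neighbours, random_state, restarts=20):
        # "Simulated parallel processing": run the serial search from many
        # different starting points and keep the deepest rut found.
        runs = (hill_climb(random_state(), score, neighbours)
                for _ in range(restarts))
        return max(runs, key=score)

    # Toy ideas-space: integers 0..99 with a bumpy score full of shallow ruts.
    score = lambda x: -(x - 73) ** 2 + 40 * (x % 7 == 0)
    neighbours = lambda x: [n for n in (x - 1, x + 1) if 0 <= n < 100]
    random_state = lambda: random.randrange(100)
    print(multi_start(score, neighbours, random_state))  # deeper rut than one run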
>By thinking about new memes, you can separate the (relative to your own set,
>of course) "true" from the "false" from the "could be" etc...
This would match up with my concept of hitting deeper local minima (ruts).
Erm... in my model, a deeper rut means being closer to an Accurate
Representation Of How Things Work. That's a sketchy definition...
>>Okay... now then... my feedback thing, where I'm saying that feedback is
>>like replication, is saying that feedback produces a metasystem, but it
>>takes time to do it. I.e., consciousness is ideas (for want of a better word)
>>emerging about the way your brain/memecology was just now... except the time
>>and space positions "now" and "in your brain" need to be smeared a bit.
>>You're never conscious of yourself BEING CONSCIOUS of HOW YOU ARE RIGHT
>>NOW... see what I'm getting at? I'm sure this is where Robin Faichney was at
>>when he was talking about potential infinite feedback loops in
>>consciousness. So /you (the fuzzy version of) now/ is... the meta-system of
>>/you a (fuzzy) moment ago/...? Am I talking shit here? Ouch...
>
>But your belief that "You're never conscious of yourself BEING CONSCIOUS of
>HOW YOU ARE RIGHT NOW..." will bring a sort of inactivity or stagnation:
>you will never really know if you don't try.
Not necessarily. Just because people will never run at infinite speed
doesn't mean no-one likes to race and beat world records. I think the Being
Conscious of How You Are Right Now problem is a constraint of the system. As
it happens, I do really like thinking about how I work, even though I kind
of think I'm never absolutely fully aware of all that's going on.
>Logically, why would you
>even try to expand your meme set if you really don't think such a thing is
>possible?
By "expanding your memeset" do you mean "having/getting new ideas"?
Because I think that new ideas are recombinations of old noes, and I don't
think there's such a thing as a memeset that DOESN'T develop new
combinations of ideas as it matures/gets older. I don't think you HAVE to
try, I think it HAS to happen as a result of the way minds work.
>>>Robinson's FS has been used to make PROLOG, a language much used for
>>>AI systems, and especially for Expert Systems. The basic idea is to
>>>arrive at a new system of propositions (hope I'm not making any mistake
>>>here), but in a reasonable amount of time.
>>>
>>>The only way to achieve this is to try to falsify the new system
>>>consisting of the old one and the opposite of the hypothesis you want to
>>>absorb.
>>
>>Sorry, I didn't really hook into that bit... can you give a bit more detail?
>
>Yep, if you have a system H with {m1, m2, m3} and you're trying to see
>whether m4 can fit into an H' with H, then you'd better try to find a
>contradiction in H'' consisting of H and NOT m4; the opposite takes way
>too much time.
Understood now (drew myself a diagram).
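(For anyone else following along, here's a toy Python sketch of the
refutation idea. It's propositional only and entirely my own illustration,
not how PROLOG does it internally: to test whether m4 follows from
H = {m1, m2, m3}, add NOT m4 and grind out resolvents until you either hit
the empty clause (a contradiction, so m4 follows) or run out of new clauses.)

    def resolve(c1, c2):
        # All resolvents of two clauses. Clauses are frozensets of literal
        # strings; a leading '~' marks negation.
        out = set()
        for lit in c1:
            neg = lit[1:] if lit.startswith('~') else '~' + lit
            if neg in c2:
                out.add(frozenset((c1 - {lit}) | (c2 - {neg})))
        return out

    def entails(clauses, query):
        # Resolution refutation: add NOT query, then saturate.
        neg = query[1:] if query.startswith('~') else '~' + query
        clauses = set(clauses) | {frozenset([neg])}
        while True:
            new = set()
            for c1 in clauses:
                for c2 in clauses:
                    if c1 is not c2:
                        new |= resolve(c1, c2)
            if frozenset() in new:
                return True      # empty clause: contradiction, query follows
            if new <= clauses:
                return False     # nothing new derivable: query doesn't follow
            clauses |= new

    # H = {m1, m1 -> m2, m2 -> m3}, written as clauses.
    H = [frozenset({'m1'}), frozenset({'~m1', 'm2'}), frozenset({'~m2', 'm3'})]
    print(entails(H, 'm3'))   # True: H plus NOT m3 is contradictory
    print(entails(H, 'm4'))   # False: m4 is independent of H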
>>1 Grab hold of a stable memetic ecology and stick with it. Hmm. That's
>>a very Cartesian-dualist way of saying it. How about... you could have (be?)
>>a thinking style in which memes and other cognitive entities put competitors
>>down very fiercely. Maybe all that'd be needed for this is heavy-handed
>>lateral inhibition circuitry.
>
>Stable, yet non-evolving? Wouldn't that mean less adaptable to eventual
>drastic spatio-temporal disturbances?
Yes... if used exclusively (which I reckon is pretty much vanishingly rare).
BUT, to an extent, a lot of people do lose touch: they fail to keep up with
changes, fail to adapt to new situations, and so lose their relevance and
their ability to achieve. Like managers who fail to adapt to new management
techniques, or people who cling to once-respected political beliefs, or
people who won't learn new skills. I'd argue that pure examples don't exist,
but that people can be lined up on continuums... so you see relative
differences between people.
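(Side note on the "heavy-handed lateral inhibition circuitry" bit I quoted
above: here's a crude Python sketch of what I had in mind, with the numbers
and names entirely made up. Each unit is boosted by its own activity and
suppressed in proportion to its rivals' activity; with heavy enough
inhibition the strongest idea ends up alone, i.e. a stable, non-evolving
ecology.)

    import numpy as np

    def lateral_inhibition(activations, inhibition=0.5, rate=0.2, steps=30):
        # Crude winner-take-all dynamics: each unit is excited by its own
        # activity and inhibited in proportion to its rivals' total activity.
        a = np.array(activations, dtype=float)
        for _ in range(steps):
            rivals = a.sum() - a                 # everyone else's activity
            a = np.clip(a + rate * (a - inhibition * rivals), 0.0, 1.0)
        return a

    print(lateral_inhibition([0.6, 0.4, 0.3]))   # -> roughly [1. 0. 0.]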
>Wouldn't it mean that other meme sets which expand (while remaining stable)
>would eventually take over (they should be selected because they have more
>information about their environment, and hence can predict more precisely
>any upcoming spatio-temporal disruption which could be detrimental to
>"survival")?
Yes. Like, at my work, two companies' Technical departments were merging.
One of the managers was great at picking up and dealing with new projects,
and worked hard to adopt new styles of management. The other said changes to
management style were mostly crap, and wanted to carry on with the same
functions that his department had always provided. He got sideways-promoted
into an essentially powerless role which looks like it'll be made redundant
soon, whereas the adaptable one has been made Technical Director. AND
REMEMBER: neither of them will be an ABSOLUTE embodiment of the conflicting
strategies... but the balance got tipped...
>Or what about keeping both strategies in mind as potentially valid
>strategies that are adapted to various situations? Then you say to
>yourself, "This is my meta-strategy. I have here two apparently conflicting
>strategies, but I know they only conflict if they're used at the same time.
>But I know they can each be very effective in the appropriate situation"?
That'd be great, I reckon. And maybe a lot of people have moments where they
manage to pull it off. But I'd gamble that almost everyone (probably
everyone) lacks the discipline to keep the meta-strategy implemented all the
time (no absolutes...), and what you see most of the time is circumstances
tipping people one way or the other. E.g. other people telling them to stick
to their guns, money running out in the training budget, company directives
saying that things must change, etc. etc.
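(The funny thing is, the meta-strategy itself is trivial to write down.
Here's a throwaway Python sketch with made-up names, just to show the shape
of it; the hard bit is having the discipline to actually run it when
circumstances are shoving you around.)

    def stick_to_your_guns(problem):
        return "keep the existing approach for: " + problem

    def adapt(problem):
        return "learn something new for: " + problem

    def meta_strategy(problem, environment_is_changing):
        # The two strategies only conflict if applied at the same time; the
        # meta-strategy picks whichever suits the current situation.
        strategy = adapt if environment_is_changing else stick_to_your_guns
        return strategy(problem)

    print(meta_strategy("departmental merger", environment_is_changing=True))
    print(meta_strategy("routine maintenance", environment_is_changing=False))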
Dave Pape
==========================================================================
I am ready.
Phonecalls: 0118 9583727 Phights: 20 Armadale Court
Westcote Road
Reading RG30 2DF