dc: (Doctor)
[personal profile] dc
1. Hmm. In some ways, I’m not feeling too bad at the moment, although my sinuses (well, one of them) are still throbbing a bit after that recent URTI. What is much more troublesome is the energy problem just now: I seem to be running on about half my normal available energy most of the time, and, most days recently, it just plummets mid-afternoon. It’s getting so I practically pass out for a couple of hours. Not good, and a bit troubling given that we are just over two weeks away from Eastercon. I’m thinking that it might be a bit stupid to volunteer for anything, which I had been thinking I would like to do. Bugger.

2. My ribs are aching at the moment, but that’s because we have spent the past week catching up with Green Wing, which we unaccountably missed the first time it was shown. Good that C4 are reshowing it before series two starts on Friday; I don’t think it would make a lot of sense if you just dropped into it in the middle (although making sense is not really the first thing that comes to mind in connection with it). It probably should be slightly troubling that it reminds me of hospitals I have worked in, as well as some people I have come across.

3. One thing I have been wondering about is future technology, and how the way SF writers today see it will look in 30 or 40 years’ time. I suspect that it probably won’t look quite so dated as some of the stuff in SF from the 1960s — but who knows, perhaps it will. What prompted this is that two of the books I’ve read recently were written in the 60s: Solaris, by Lem (who died on Monday), and The Long Result by John Brunner.

Both these books are set some time in the future, the Brunner at least two or three hundred years, Solaris possibly further (it’s pretty indeterminate, but the planet itself was discovered something like a century and a half before the book starts, and it is clear there was already an established spacefaring culture then). Despite that, the most sophisticated forms of information storage envisioned by the authors are tapes and microfiche. It doesn’t impede enjoyment of the books, it just seems so… quaint. There is less of the jarring technology in The Long Result, probably because Brunner was actually thinking about the technology; Lem doesn’t seem to have been interested in that, what he was interested in was the encounter with the incomprehensible. For that reason, the occurrence of archaic technology isn’t so distracting as it might have been in another setting. Still, whenever I go back to Solaris, I find I have forgotten that it was published in 1961.

Things like the microfilm libraries, tape recorders, having to wait for the valves to warm up on a transmitter: these are all understandable given 1960s technology. I do sometimes wonder, though, what was in Lem’s mind when he envisioned a future where mankind has starships travelling far from Earth (Solaris being a planet of a binary star system hidden from Earth’s view by an interstellar dust cloud), and has the technology to float a scientific station over Solaris on gravitors, yet the protagonist gets from the starship to the station in a capsule which seems no more advanced than a Mercury or Voskhod, even descending to the station on parachutes (which is incredible, really). But, as I say, he wasn’t focussed on the technology.

Looking at the stuff being published today, I wonder what will look quaint in 2046.

4. I know this is hardly a unique sentiment, but if I hear one more fuckwit film producer or academic type say of some piece of SF (literary or cinematic) that it isn’t really SF because it isn’t about spaceships and laser pistols or doesn’t involve expensive special effects, I’m inclined to start finding out names and addresses and set about purchasing things like baseball bats.

5. I am very amused that the idiot City Manager (what is that, anyway?) [livejournal.com profile] wibbble spoke of the other day has grasped that he has now been the subject of an article in The Register (I know he was told directly, by [livejournal.com profile] wibbble if no one else, and I doubt wibbble was the only one, but even so I would not put money on him understanding that), and that as a result, many, many geeks are emailing him. He doesn’t like it and wants El Reg to make it stop. No, he didn’t say please.

6. Probably just as well I am low on energy at the moment, as it seems those bastard Tories and the Lords have caved on the ID card bill. Now, it would be good if the LibDem leader were to go around making a great ruckus about this and the Legislative and Regulatory Reform Bill, to try to get across to the public what exactly this government is up to. Fat chance, I think. I wonder what the SNP will have to say on it.

7. And so to bed.

(no subject)

Date: 2006-03-29 09:45 pm (UTC)
wibbble: A manipulated picture of my eye, with a blue swirling background. (Cryptic Eye)
From: [personal profile] wibbble
Some of the best science fiction, IMO, doesn't date technologically because the stories aren't about mass storage media, or whatever. Things like that are just props.

On the other hand, some of it doesn't date because it's /really/ good, like Asimov's robot stories. These are all stories that hang off of the technological element, but which haven't dated because of it. The technobabble might not be as cutting edge ('positronic brain' is a term that's suffered due to Star Trek, mostly), but the ideas behind them still /are/ cutting edge.

After watching Battlestar Galactica, E was talking about the origin of the Cylons in the new series (developed essentially as slaves) and wondering why people never seem to build safeguards into their robots, which led to a discussion of the three laws of robotics. From my time in AI, I know that these things are considered in academic circles even now - if we create AI, is it safe not to encode the three laws in it? If we create an intelligent artificial being, isn't it immoral /to/ encode the three laws in it, making it a slave forever?

Fun stuff.

(no subject)

Date: 2006-03-29 11:56 pm (UTC)
From: [identity profile] tanngrisnir.livejournal.com
I’d say just about all good (never mind the best) SF isn’t about the technology. On the other hand, if you are looking at a supposedly very technologically advanced society and a bit of equipment fails because of a valve going, or people are agonising over getting time on a computer, or the best storage people have is tapes or microfilm, it brings you up short. I suppose what interests me is the assumptions we make about the possible. In the 60s and even the 70s, you would look long and hard for anyone who had the slightest idea that computers would ever be other than vast, room-occupying machines, and access to them a preciously guarded commodity.

I don’t know if you have ever seen the BBC series Moonbase 3 (which, come to think of it, was made in 1973). It is set on the European Moonbase (bases One and Two are American and Russian, obviously) in 2003. If you see it now, it is quite peculiar (apart from the production values, which were actually pretty standard for 1973) for two reasons: one is the continuing presence of the Cold War, the other is the level of technology. The scientists are always agonising over getting more time on the computer, and there is no notion of the sort of storage media we actually have now, where any individual can have music and films in compact media. I do wonder what assumptions we make now about technological development which will be false.

One which you refer to, robots, is interesting. They were nearly ubiquitous in SF 30-plus years ago. (Both Solaris and The Long Result incorporate them in their worlds.) The assumption that they would be humanoid creations doesn’t seem to fit with the way technology is actually going.

The Three Laws thing is a good question; if we get that far, it probably is a good idea to have some safeguard like that. Although, come to think of it, it might not prevent “robots” from taking over for our own good, à la Gort.

(no subject)

Date: 2006-03-30 06:59 am (UTC)
From: [identity profile] hermi-nomi.livejournal.com
...if I hear one more fuckwit film producer or academic type say of some piece of SF (literary or cinematic) that it isn’t really SF because it isn’t about spaceships and laser pistols or doesn’t involve expensive special effects, I’m inclined to start finding out names and addresses...

Lol. I have read several discussions relating to this very issue (most are probably on chronicles-network.com; I can't be more precise as I have a terrible memory). But I quite agree with you (even though my preference is for fantasy). Sci-fi is more than space exploration and stun guns. It's like these people who say 'but it can't be fantasy ~ there's no swords or sorcery'. To me sci-fi is about how technology is used and dealing with advanced societies ... actually, if I continue with this I'd really show myself up.

And then there is the thing with ID cards. I heard that from 2008 we'll be able to spend between £30 and £90 to have all our personal security details put onto a card, just for someone to nick it ~ if we choose; and by 2010 it'll be compulsory to pay to have the risk of someone accessing all our personal details. As if we really have a choice. By 2010 I'll be a criminal 'cos there's no way I'm forking out good money to be put under the thumb like that in a free country. I keep thinking about Minority Report while writing this ~ especially the part where Tom Cruise has to get a new set of eyes 'cos his eye patterns are on record ... I've never held such views before, but I'm beginning to think that the government is gearing more and more towards some sort of 1984 society. I have serious worries that by 2050 we'll be living under some sort of dictatorship, and all because the gov. doesn't want to be seen to be unfair to 'minority' groups. Yet while it's doing that it is undermining us and creating a new minority group ... and I don't want anyone to be able to take over my identity using details that are intrinsically mine. ID theft is bad enough without someone being able to access thumbprints held on record in a card! If that isn't a science fiction concept ~ I don't know what is. I realise I've ranted a little bit ... pick me up on anything I haven't been clear about :embarrassed:

Oh, and would something we have created really be able to do anything for 'our own good'? Wibbble asks if it's moral to impose restrictions on AI that we've created. I'd say yes. It would be our responsibility. AI is AI, unless it can be proven to be as sentient as humans.

(no subject)

Date: 2006-03-31 12:43 am (UTC)
From: [identity profile] tanngrisnir.livejournal.com
AI is AI, unless it can be proven to be as sentient as humans

That’s an understandable viewpoint; the snag is, how do you demonstrate sentience? If a machine passes the Turing test, is it sentient or just well-enough programmed to deceive a human? How do you demonstrate people are sentient (as opposed to just assuming they are because they’re human)?

Tricky territory.

Proving sentience

Date: 2006-03-31 05:53 am (UTC)
From: [identity profile] hermi-nomi.livejournal.com
Philosophy debate :-))
I can't say much right off the top of my head as I would probably end up talking out of my a*se, but I would say that you could demonstrate that an AI lifeform is a sentient being if it can behave beyond its programming. Of course, if its programme is to act beyond its programme, then sentience would be even harder to prove. (Isn't that what the Turing Test asks? Could you remind me?) Or you could get a bit more metaphysical(?) by saying that an AI lifeform demonstrates sentience when it is capable of making moral decisions (a la Dorfl in Pratchett's Feet of Clay). The ability to make a moral choice is, I think, what separates us from all other lifeforms. This ability is what makes us (apparently) 'superior'. It is the responsibility factor of being a moral being that explains why so many people turn their backs on morality. ... But then, if morality is a choice, then even if you programmed AI to make moral choices between right and wrong, a sentient AI may choose not to follow your morality (a la God and Adam and Eve), which would surely prove that the being is sentient(?)

Re: Proving sentience

Date: 2006-03-31 09:11 am (UTC)
From: [identity profile] tanngrisnir.livejournal.com
The Turing test: can a human distinguish a machine from another human in conversation? This is usually done by setting up a text conversation so that there's no suggestion you are testing the efficacy of voice synthesis rather than anything else.
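The shape of the test described above can be sketched in a few lines of code; this is a toy illustration only, and the function names and canned replies are invented for the example rather than taken from any real implementation. The structural point is that the judge sees nothing but text transcripts, one per hidden respondent:

```python
def machine_reply(prompt):
    # A trivially "programmed" interlocutor: canned deflections,
    # with no understanding behind them at all.
    canned = {
        "hello": "Hello! Lovely weather, isn't it?",
        "are you human?": "What a strange question. Of course.",
    }
    return canned.get(prompt.lower(), "Hmm, tell me more about that.")

def human_reply(prompt):
    # Stand-in for a real person typing at a terminal.
    return "I'm not sure how to answer that, honestly."

def imitation_game(judge_questions, respondents):
    # The judge gets one text transcript per hidden respondent;
    # the test is whether the transcripts are distinguishable.
    transcripts = {}
    for name, reply_fn in respondents.items():
        transcripts[name] = [(q, reply_fn(q)) for q in judge_questions]
    return transcripts

transcripts = imitation_game(
    ["hello", "are you human?"],
    {"A": machine_reply, "B": human_reply},
)
```

Nothing in the transcripts tells the judge which respondent is which; that anonymity is exactly why a pass demonstrates only indistinguishability in conversation, not sentience.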

How do you recognise a moral decision? It is possible to make choices which might be interpreted as moral out of self-interest or even pure logic, depending on the circumstances. If I see you do something, say help someone you don't know in a way that is some inconvenience to you, perhaps I have seen you do something rooted in a moral sense; or I may have seen you do something because you think it will get you something (a job, a hot boyfriend, whatever). From the outside, there is no way of telling what is going on in your head. You'll find that some people will tend to ascribe moral or admirable motives to others' actions while other people will always tend to assume the basest motives; but there is no way of proving either group is more right than the other.
