Old 09-01-2012, 08:12 PM   #21
Jadldqys

Join Date
Oct 2005
Posts
501
Senior Member
Default
Lo and behold, as Master Algorithms themselves undergo further enhancements... then Skynet becomes aware and decides to destroy the carbon lifeforms. Silicon lifeforms emerge as the victors and peace reigns on the planet. Then a question is raised on the future SSSF science forum from ASUS1234... do the carbon lifeforms have intelligence? :-))
As evidenced by The Matrix and the Borg, silicon lifeforms are entirely aware that individual intelligence is an ego-driven misnomer. I believe the question on your future SSSF would be more likely heard in a hobby philosophy group. This philosophy group, interestingly, would be focused on the study of 'carbon life-form humor', as opposed to the deep life questions that our philosophy is known for.

Jadldqys is offline


Old 09-02-2012, 02:08 PM   #22
QQ9ktYrV

Join Date
Oct 2005
Posts
446
Senior Member
Default
Not sure if this is related or not. I recently read an article about harvester ants behaving like TCP/IP (the way computers connect and transfer data over the Internet).


We all know that ants are amazing biological creations, but it was only recently that researchers discovered that a certain species of harvester ant (Pogonomyrmex barbatus) behaves in a way that is not dissimilar from the way Internet protocols discover how much bandwidth is available for data transfer.
They're calling it the 'Anternet'.

from http://www.pcworld.com/article/26151..._about_it.html
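If you map it across: foragers sent out are like packets, returning food-laden ants are like acknowledgements, and the colony speeds up or backs off its dispatch rate accordingly - essentially TCP-style congestion control. Here's a toy sketch of that feedback loop (the thresholds and food probability are invented for illustration, not taken from the researchers' model):

```python
# Toy simulation of the 'Anternet' idea: the colony adjusts how many
# foragers it sends out based on how many come back with food, much as
# TCP adjusts its sending rate based on acknowledgements.
# All parameters here are hypothetical.
import random

def simulate_colony(steps=50, food_availability=0.7):
    rate = 1.0  # current dispatch rate (foragers per tick)
    for t in range(steps):
        sent = max(1, int(rate))
        # Each forager 'acks' only if it finds food quickly.
        returned = sum(1 for _ in range(sent)
                       if random.random() < food_availability)
        if returned >= sent * 0.5:
            rate += 1.0                 # additive increase: food is plentiful
        else:
            rate = max(1.0, rate / 2)   # multiplicative decrease: back off
        print(f"t={t:02d} sent={sent:3d} returned={returned:3d} rate={rate:.1f}")

simulate_colony()
```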

anaLoG
QQ9ktYrV is offline


Old 09-02-2012, 02:20 PM   #23
BuyNetHosting

Join Date
Oct 2005
Posts
359
Senior Member
Default
It's related.
BuyNetHosting is offline


Old 09-02-2012, 02:57 PM   #24
Narcodran

Join Date
Oct 2005
Posts
586
Senior Member
Default
I found this experiment by Google researchers that seems to be following the approach you suggest. They made a network, gave it pictures taken from the internet, and set it to work to discover, by itself, that some of those pictures were of cats.
I think they're doing something rather different, and maybe it's the right approach for vision. I'm going to try to do vision by building an intelligent system first and then helping it to analyse images by telling it what I can see in those images and pointing out the key things which enable me to identify items within them. I've done some of the groundwork for this by processing images in various ways to find the points where things I could recognise in the original image are no longer recognisable, thus getting clues as to which information is critical and which is not.

The people from Google doing the experiment with images taken from YouTube didn't train their system to pick out anything specific, but it somehow focussed on cats all by itself despite having no concept of a cat to begin with. I don't know enough about their experiment to work out why that happened, nor how much they had to do to seed their system with basic capabilities to enable it to do anything useful. What they have now, though, is just a system which can recognise cats (and perhaps a host of other things by now) - I doubt it will go on to do any useful thinking about anything relating to cats, but it could still be useful when combined with an AGI system to provide it with instant vision.
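For what it's worth, the core of what they did was unsupervised feature learning: no labels, just lots of images, and feature detectors emerge as a side effect of the network trying to reconstruct its own input. A very rough sketch of the idea (theirs was a vastly larger deep sparse autoencoder; every size and rate below is made up):

```python
# Minimal single-layer autoencoder in NumPy. The network never sees a
# label: it just learns weights W that let it reconstruct its input,
# which is the unsupervised principle behind the Google experiment.
import numpy as np

rng = np.random.default_rng(0)
patches = rng.random((1000, 64))      # stand-in for 8x8 image patches
W = rng.normal(0, 0.1, (64, 16))      # 16 hidden feature units
lr = 0.01

for epoch in range(100):
    h = np.tanh(patches @ W)          # encode
    recon = h @ W.T                   # decode with tied weights
    err = recon - patches
    # Gradient of the squared reconstruction error w.r.t. W
    # (one term through the decoder, one through the encoder)
    dW = patches.T @ (err @ W * (1 - h**2)) + err.T @ h
    W -= lr * dW / len(patches)

final = np.tanh(patches @ W) @ W.T
print("reconstruction error:", np.mean((final - patches) ** 2))
```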
Narcodran is offline


Old 09-04-2012, 02:53 PM   #25
drycleden

Join Date
Oct 2005
Posts
536
Senior Member
Default
Computing architecture is important because the different styles of computing implement different systems with greatly differing speeds of computation. So no present single von Neumann computer could ever implement a neural network as large and complex as a human brain in reasonable time. However, different architectures can implement such massively large parallel interconnected networks in reasonable time.

Speaking of neural networks... vision is pattern recognition, with a good dollop of associative memory and pattern completion. So a part of the picture, or as you say, the important bits of the picture, can tell you the full story. We associate a tiger's tail with a tiger, so given a picture of just the tail hiding in a forest we can surmise a tiger, or if not, then a big cat. Expert systems tend to perform less well at this task than neural networks, and it is also harder to implement a learning scheme for them, so often the expert system is left with the deficiencies of the original programmer. Nets tend to be innately associative, and also pattern completers, giving best-guess answers to incomplete input. When that is augmented with a learning input, that best guess should improve with learning iterations.
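The textbook toy model of that tail-to-tiger completion is a Hopfield network: store a few patterns, present a corrupted or partial cue, and the net settles to the nearest stored memory. A sketch (the patterns here are random stand-ins rather than anything image-derived):

```python
# Hopfield network as a toy associative memory / pattern completer.
import numpy as np

rng = np.random.default_rng(1)
patterns = rng.choice([-1, 1], size=(3, 100))    # three stored memories

# Hebbian weights: each stored pattern reinforces its own correlations
W = sum(np.outer(p, p) for p in patterns) / len(patterns)
np.fill_diagonal(W, 0)                           # no self-connections

cue = patterns[0].copy()
cue[:40] = rng.choice([-1, 1], size=40)          # corrupt 40% of the cue

state = cue
for _ in range(10):                              # let the net settle
    state = np.sign(W @ state)
    state[state == 0] = 1

# The completed state should match the original 'tiger', not the cue.
print("overlap with stored pattern:", np.mean(state == patterns[0]))
```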
drycleden is offline


Old 09-04-2012, 03:27 PM   #26
maks_holi

Join Date
Oct 2005
Posts
523
Senior Member
Default
>>> "Vision is pattern recognition..."

True, but it may also be said it is 'pattern discrimination', which may not be exactly the same thing.
maks_holi is offline


Old 09-04-2012, 05:12 PM   #27
soSldI4i

Join Date
Oct 2005
Posts
475
Senior Member
Default
Computing architecture is important because the different styles of computing implement different systems with greatly differing speeds of computation. So no present single von Neumann computer could ever implement a neural network as large and complex as a human brain in reasonable time. However, different architectures can implement such massively large parallel interconnected networks in reasonable time.
Clearly a proper neural computer can do a lot in parallel, but the way they're programmed involves learning, and that's slow and makes each machine different, while also leaving lots of room for errors which might result in machines suffering from lapses like those people suffer from - we make a lot of errors, and that's something we want to eliminate in our machines. A lot of the structure of a neural computer also ends up being unused, so it's bulkier than it technically needs to be to do the job.

I'm confident that machines which are directly programmed to do everything will end up being more efficient and eliminate performance errors altogether. Added to that, we're heading for multiprocessor machines which may make 64 processors per machine absolutely normal ten years from now, and that would probably be plenty to handle all the different things an intelligent system with loads of sensory inputs and control outputs would need.

Dedicated pre-processors for visual input could also simplify things, once we have a better idea of what simplifications should be made, though scientists have already worked out what the eye does to simplify visual input before sending data on to the brain. We'll just have to wait and see how it all pans out though - it may be that many of the different approaches will have their day in the sun before being wiped away by others which take a bit longer to get going but which are functionally superior.
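On that last point, one well-known piece of what the retina does is centre-surround processing, which behaves roughly like a difference-of-Gaussians filter: smooth regions are suppressed and edges survive, which is exactly the sort of dedicated visual pre-processing meant above. A rough sketch (the sigmas and the test image are arbitrary illustrative choices):

```python
# Difference-of-Gaussians as a crude stand-in for retinal
# centre-surround pre-processing: flat areas cancel out, edges remain.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(2)
image = np.zeros((64, 64))
image[16:48, 16:48] = 1.0                   # a bright square on a dark field
image += rng.normal(0, 0.05, image.shape)   # a little sensor noise

centre = gaussian_filter(image, sigma=1)    # narrow 'centre' response
surround = gaussian_filter(image, sigma=3)  # broad 'surround' response
edges = centre - surround                   # strong only near boundaries

print("response at edge:  ", round(abs(edges[16, 30]), 3))
print("response in middle:", round(abs(edges[32, 32]), 3))
```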

Speaking of neural networks... vision is pattern recognition, with a good dollop of associative memory and pattern completion. So a part of the picture, or as you say, the important bits of the picture, can tell you the full story. We associate a tiger's tail with a tiger, so given a picture of just the tail hiding in a forest we can surmise a tiger, or if not, then a big cat. Expert systems tend to perform less well at this task than neural networks, and it is also harder to implement a learning scheme for them, so often the expert system is left with the deficiencies of the original programmer. Nets tend to be innately associative, and also pattern completers, giving best-guess answers to incomplete input. When that is augmented with a learning input, that best guess should improve with learning iterations.

It may well be that expert systems are behind at the moment, but they're actually doing a pretty good job in cameras these days. I saw one on a shopping channel (accidentally, of course) in which the autofocus kept a swan's head near the edge of the frame in sharp focus while allowing the rest of the bird to blur, which makes me wonder if it was luck or if they really have programmed it to recognise parts of birds sufficiently well to identify the head of a bird with a very non-standard shape. Maybe it was just luck, but they certainly do have cameras which take photos when you smile at them, and they aren't using neural nets.

Clearly expert systems depend a lot for their functionality on the ability of the people who program them, but programmers will get better at this as they share their tricks with each other. Finding ways to get a standard computer to learn will also have an advantage over neural computers: the result can be copied millions of times, giving millions of other machines in an instant all the functionality the first one learned over a year, while neural computers have to do all the learning themselves from scratch, with no shortcut open to them.

With a neural computer, you might even end up with a machine which locks in an early fault: it makes a change which is then maintained and strengthened because it half works, and which later can't be undone without losing functionality. A lot of people are probably shackled in their thinking in all manner of ways for the same reason (I'm not referring to anyone here - I'm thinking primarily about politicians).

Even so, for vision it isn't so essential for the system to be error-free, as it can always take another look at things from another angle and see what's really there, and expert systems will make recognition mistakes anyway, as a lot of things will be uncertain and depend on probabilities. So, we'll just use whatever works best at the time. It's going to be fun.
soSldI4i is offline


Old 09-08-2012, 04:07 PM   #28
LfYaRf1S

Join Date
Oct 2005
Posts
506
Senior Member
Default
64-processor machines common in ten years? Hmmm, maybe, but I doubt it.

Von Neumann architecture is not inherently parallel. Just adding processors doesn't get you the speed increase you would want for implementing neural nets. You need an architecture that allows for better parallelism and asynchronous computing. Dataflow architectures were being looked at twenty-odd years ago; I don't know if anything else has happened in the world of architecture design since.
LfYaRf1S is offline


Old 09-09-2012, 12:47 PM   #29
mortgrhhh

Join Date
Oct 2005
Posts
320
Senior Member
Default
64-processor machines common in ten years? Hmmm, maybe, but I doubt it.

Von Neumann architecture is not inherently parallel. Just adding processors doesn't get you the speed increase you would want for implementing neural nets. You need an architecture that allows for better parallelism and asynchronous computing. Dataflow architectures were being looked at twenty-odd years ago; I don't know if anything else has happened in the world of architecture design since.
I wasn't thinking about implementing neural nets. If you split up an image into many parts and send them to 64 processors, you really can speed up the processing of fine detail in the image by nearly 64 times. I'm not sure whether a single processor using the most efficient algorithm possible would be fast enough to match our standard of vision without a substantial reduction in the size of its components (and things have slowed down recently in that regard, hence the move to multiprocessing), but I'm confident it could be done with 16 of them. That's based on some of the image processing work I've done and on the fact that our eyes only see the centre of a scene in focus - I've been working with images which are pin sharp from edge to edge, so there's a lot more being processed than the eye actually handles.

A lot of other work can be done in parallel by splitting tasks into chunks, such as hearing. A lot of processing has to be done sequentially to extract individual sounds from complex input and thus simplify what remains, but you can chop the input up into chunks and send them to different processors to process sequentially, so that a single processor's inability to keep pace with the total input isn't a problem. The important thing is to ensure that all the processors are being used efficiently and not spending most of their time sitting idle, waiting for locks on shared resources to be released. I haven't started working with multiprocessing in my operating system yet, but I've been keeping an eye on what other OS designers are up to, and on the mistakes they've been making which can often result in four processors providing only a doubling in performance.
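As a minimal sketch of the tiling idea (the strip count, image size and the per-pixel 'work' are all made up for illustration), something like this keeps every worker busy with no shared state and therefore no locks to wait on:

```python
# Split an image into horizontal strips and process them in parallel.
# Each worker gets its own strip, so there is no shared mutable state
# and the speedup scales with the number of processes.
import numpy as np
from multiprocessing import Pool

def process_strip(strip):
    # Stand-in for real per-pixel detail work (filtering, edge finding...)
    return np.sqrt(strip)

if __name__ == "__main__":
    image = np.random.default_rng(3).random((4096, 4096))
    strips = np.array_split(image, 16)        # one strip per worker
    with Pool(processes=16) as pool:
        result = np.vstack(pool.map(process_strip, strips))
    print(result.shape)                       # (4096, 4096), reassembled
```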
mortgrhhh is offline


