Archive

Archive for the ‘COMPUTER_TECH_FOCUS’ Category

Cloud computing (in plain English)

December 13, 2011 Leave a comment
Author: shahabaz
Listen to the audio for a good understanding:

http://www.4shared.com/embed/1016635185/b6560930

www.icloud.com
www.cloudos.com

The Cloud Computing buzz is growing every day, with a growing number of businesses and government establishments opting for cloud computing based services. On December 14, 2009, the City of Los Angeles (United States) switched to cloud computing, equipping 34,000 city employees with Google Apps for email and collaboration in the cloud. Other U.S. cities already on the Google Apps cloud are Washington D.C. and Orlando.

There is no dearth of Cloud Computing based services if you are ready to pay. But we all love free stuff on the Internet, and free stuff is often just as useful as the stuff you pay for. In fact, perhaps the best Internet innovation ever is absolutely free. If you guessed Google search, then you guessed correctly! I stumbled upon some excellent free software and services, though you can always debate on what makes them cloud based. Cloud Computing is still fuzzy and everyone has her or his own definition.

Update: The definition of cloud computing is no longer fuzzy. NIST (the National Institute of Standards and Technology, an agency of the U.S. Department of Commerce) has published standard guidelines that are now widely accepted in the industry. I’ve simplified them briefly in the following post:

AT&T Hackers Arrested After Funneling $2 Million To Terrorists

November 28, 2011 Leave a comment

Filipino police and the FBI arrested four people who allegedly hacked AT&T’s phone systems as part of a plan to funnel money to terrorists, escalating the debate over data security.

The suspects were reportedly planning to use the hacked information to divert funds to a Saudi-based terror group, and are identified as part of a group that helped finance the deadly 2008 terrorist attacks in Mumbai, India.

The suspects reportedly hacked the phone lines of different telecommunications companies, including AT&T, and diverted money stolen from the intrusion to bank accounts belonging to the terrorists. These terrorists paid the Filipino hackers on commission, according to the Philippines’ Criminal Investigation and Detection Group, or CIDG.

The hacking resulted in almost $2 million in losses incurred by AT&T, according to the CIDG. An FBI spokeswoman said the hackers targeted customers of AT&T, not the carrier itself, but didn’t elaborate because the agency does not discuss investigations.

AT&T’s spokesperson declined to comment on reports that the scheme generated $2 million in bogus charges.

Last week, AT&T e-mailed select customers to inform them of an attempted hack on its database after its network temporarily went down in the northeast, advising them to be cautious of suspicious e-mails or text messages that ask for sensitive information.

At that time, the Dallas, Texas-based company said no accounts were breached and that less than one percent of the company’s customers were contacted, but the carrier believed it was an “organized attempt to obtain information on a number of customer accounts.”

It is unclear whether the attempted hack is related to the recent arrests or is a separate incident.

The news also follows a formal Pentagon report to Congress on cyber-warfare earlier this month, which laid out a more specific role for the U.S. military in the event computer-generated attacks threaten the nation’s economy, government, or armed forces.
The arrests in the AT&T case indicate cyber-attacks are on the rise, and companies are looking to the Department of Defense to shore up defensive strategies.

Experts predict future terrorist attacks worldwide may originate in cyberspace, and warn they may bring down financial institutions, industrial compounds and other critical systems. The DoD is working to boost defensive measures against cyber-attacks and to identify their sources more effectively, to make cyber-terrorists pay for their actions.

In this latest case, officials in Manila are pointing to the suspects’ possible links to terror networks connected to Al Qaeda. Filipino police said the country’s weak laws against cyber-crime contribute to its attractiveness as a base for these kinds of syndicates.

AT&T Hackers Arrested After Funneling $2 Million to Terrorists originally appeared at Mobiledia on Mon Nov 28, 2011 12:33 pm.

Categories: COMPUTER_TECH_FOCUS

AMD introduces branded memory modules for desktops

November 28, 2011 Leave a comment

Advanced Micro Devices’ first branded desktop system memory modules, called AMD Memory, will be available in North America through major retailers, the company said Monday.

By offering its own branded memory modules for desktops, AMD aims to “take the guesswork out of DRAM selection, providing an easy and straightforward experience when looking for the ideal match for gaming or multimedia PC needs”, it said in a statement.

AMD has been supplying and validating memory for AMD Radeon graphics cards for several years, and saw an opportunity to add system memory to its product line, AMD said.

The company did not immediately respond to requests for comment on whether it will offer branded memory modules for other kinds of computers besides desktops.

It said it is collaborating with memory module makers to create the AMD Memory branded products from components qualified to meet certain specifications. Because the memory components are tested and certified, end-users can be assured of compatibility with AMD platforms, it added.

AMD has partnered with memory modules company Patriot Memory in Fremont, California, and PC enhancement products firm VisionTek Products in Illinois. VisionTek, which is already an authorized reseller for AMD products, will distribute the AMD modules through its distributor D&H.

AMD Memory branded products will be available soon through major retailers like Amazon.com, Fry’s, Memory Express, and Best Buy in Canada, AMD said.

The company said it has used the AMD OverDrive performance optimization tool to test and optimize DRAM with the company’s APUs (Accelerated Processing Units), CPUs (Central Processing Units), GPUs (Graphics Processing Units) and chipset platforms.

AMD Memory is available in three sizes, 2GB (gigabytes), 4GB and 8GB, at a range of price points and speeds, AMD said. For the entertainment category, it will feature 1333MHz and 1600MHz RAM. A performance version supports speeds up to 1600MHz with low latency and comes in matched pairs, while Radeon Edition DRAM will run at 1866MHz, AMD said.

Categories: COMPUTER_TECH_FOCUS

TED sixth sense technology

August 12, 2011 Leave a comment


Boffins Beam 800 Mbps Wireless Network From Flashlight!

August 3, 2011 Leave a comment
German researchers have found out how to fill a room with 800 megabits per second of wireless data using an inexpensive LED setup. Researcher Klaus-Dieter Langer said, “Using red-blue-green-white light LEDs, we were able to transmit 800Mbit/s in the lab. That is a world record for the [visible light communication] method.”

Langer, working at Berlin’s Heinrich Hertz Institute, a branch of the Fraunhofer Institute for Telecommunications, also recounted an earlier experiment that reached 100Mbit/s using only white-light LEDs. “We turn the LEDs off and on in very rapid succession and transfer the information as ones and zeros,” he explained. “The modulation of the light is imperceptible to the human eye. On the receiving end is a simple photodiode, and circuitry that changes the diode’s signals into a digital data stream.” According to Langer, the advantages of this method include the simplicity of turning LEDs into signal-sending devices and the elimination of cabling as a signal-transferring medium.
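The encoding Langer describes is essentially on-off keying: the LED is switched on for a 1 and off for a 0, and the photodiode thresholds the light back into bits. Here is a minimal sketch of that idea in Python; the function names and structure are illustrative only, not part of any real VLC stack.

```python
def ook_modulate(data: bytes) -> list[int]:
    """Turn each byte into 8 light levels (1 = LED on, 0 = LED off)."""
    levels = []
    for byte in data:
        for bit in range(7, -1, -1):        # most significant bit first
            levels.append((byte >> bit) & 1)
    return levels

def ook_demodulate(levels: list[int]) -> bytes:
    """Reassemble bytes from the photodiode's thresholded readings."""
    out = bytearray()
    for i in range(0, len(levels), 8):
        byte = 0
        for level in levels[i:i + 8]:
            byte = (byte << 1) | level
        out.append(byte)
    return bytes(out)

# Round-trip a short message through the simulated light channel.
message = b"VLC"
assert ook_demodulate(ook_modulate(message)) == message
```

A real system would add clock recovery and error correction on top of this, which is part of why the lab setups above needed dedicated receiver circuitry rather than a bare photodiode.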
In the 100Mbit/s experiment, the signalling LEDs were situated on the ceiling, and the transmission was error-free over an area of 10 square meters, received by a group of four photodiode-fitted laptops. HHI researcher Anagnostis Paraskevopoulos said, “We transferred four videos in HD quality to four different laptops at the same time.” Using visible light as a signaling medium instead of radio waves has clear advantages in areas such as hospitals and aircraft, where radio transmission is not possible and where cabling would be prohibitively expensive.
The most obvious disadvantage of visible light communication (VLC) is that the signal can easily be blocked by any solid object, for example a hand moving between the LEDs and the photodiode. The inability of light to penetrate walls also limits VLC to special-case scenarios. As limited as VLC may seem, when The Reg spoke in June with Aicha Evans, wireless engineering manager at Intel, she stated that “a lot of people are talking about visible light.”
Though Evans conceded that “it’s still science fiction,” VLC may well prove its value in the last few meters of a data stream. If at this early phase of its evolution it is already being demonstrated at 800Mbit/s speeds, VLC may well turn out to be a practical high-speed, within-four-walls broadcast WLAN in future implementations.

What is Thunderbolt?

July 7, 2011 Leave a comment

Powerful technology from a powerful collaboration.

Thunderbolt began at Intel Labs with a simple concept: create an incredibly fast input/output technology that just about anything can plug into. After close technical collaboration between Intel and Apple, Thunderbolt emerged from the lab to make its appearance in the new MacBook Pro and the new iMac.
Intel co-invented USB and PCI Express, which have become widely adopted technologies for data transfer. Apple invented FireWire and was instrumental in popularizing USB. Their collective experience has made Thunderbolt the most powerful, most flexible I/O technology ever in a personal computer.

One small port. One giant leap in possibilities.

Both MacBook Pro and iMac now give you access to a world of high-speed peripherals and high-resolution displays with one compact port. That’s because Thunderbolt is based on two fundamental technologies: PCI Express and DisplayPort.
PCI Express is the technology that links all the high-performance components in a Mac. And it’s built into Thunderbolt. Which means you can connect external devices like RAID arrays and video capture solutions directly to MacBook Pro or iMac — and get PCI Express performance. That’s a first for any computer. Thunderbolt also provides 10 watts of power to peripherals, so you can tackle workstation-class projects on the go with MacBook Pro or from your home office with iMac. With PCI Express technology, you can use existing USB and FireWire peripherals — even connect to Gigabit Ethernet and Fibre Channel networks — using simple adapters.
And because Thunderbolt is based on DisplayPort technology, the video standard for high-resolution displays, any Mini DisplayPort display plugs right into the Thunderbolt port. To connect a DisplayPort, DVI, HDMI, or VGA display, just use an existing adapter.

Performance and expansion made ultrafast and ultrasmart.

Thunderbolt I/O technology gives you two channels on the same connector with 10 Gbps of throughput in both directions. That makes Thunderbolt ultrafast and ultraflexible. You can move data to and from peripherals up to 20 times faster than with USB 2.0 and up to 12 times faster than with FireWire 800. You also have more than enough bandwidth to daisy-chain multiple high-speed devices without using a hub or switch. For example, you can connect several high-performance external disks, a video capture device, and even a Mini DisplayPort display to a single Thunderbolt chain while maintaining maximum throughput.
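The “up to 20 times” and “up to 12 times” figures follow directly from the nominal line rates. A quick back-of-the-envelope check (my own illustration, not from the original post; real-world throughput is lower than the line rate):

```python
# Nominal line rates in gigabits per second.
rates_gbps = {
    "USB 2.0": 0.48,
    "FireWire 800": 0.8,
    "Thunderbolt (one channel)": 10.0,
}

def seconds_to_move(gigabytes: float, rate_gbps: float) -> float:
    """Time to move a payload at a given line rate (8 bits per byte)."""
    return gigabytes * 8 / rate_gbps

# Moving a hypothetical 50 GB video project over each interface:
for name, rate in rates_gbps.items():
    print(f"{name}: {seconds_to_move(50, rate):.0f} s")

# The speed-up factors behind the marketing claims:
print(10.0 / 0.48)   # about 20.8x over USB 2.0
print(10.0 / 0.8)    # 12.5x over FireWire 800
```

So a 50 GB transfer that takes roughly 14 minutes over USB 2.0 comes down to about 40 seconds over one Thunderbolt channel, at nominal rates.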

High-Speed I/O Performance

USB 2.0: 480 Mbps
FireWire 800: 800 Mbps
ExpressCard: 2.5 Gbps
USB 3.0: 5 Gbps
Thunderbolt: Ch. 1 10 Gbps, Ch. 2 10 Gbps

No project is too massive.

Now you can create a professional video setup for your MacBook Pro or iMac, just as you would for your Mac Pro. If you’re a video editor, imagine connecting high-performance storage, a high-resolution display, and high-bit-rate video capture devices to handle all the post-production for a feature film, right on your MacBook Pro or iMac. Thunderbolt I/O technology allows you to daisy-chain up to six new peripherals, such as the Promise Pegasus RAID or LaCie Little Big Disk, plus an Apple LED Cinema Display. And that’s just the beginning. With Thunderbolt technology, peripheral manufacturers finally have what they need to take high-performance devices from workstations and top-of-the-line desktops to just about any computer.

How computers can cure cultural diabetes

July 5, 2011 Leave a comment


The networked computer offers an antidote to the junk culture of broadcasting. Why not choose the healthy option?
Using computers to create will benefit human culture
THINK of those fleeting moments when you look out of an aeroplane window and realise that, regardless of the indignities of commercial air travel, you are flying, higher than a bird, an Icarus safe from the sun. Now think of your laptop, thinner than a manila envelope, or your cellphone nestled in the palm of your hand. Take a moment or two to wonder at those marvels. You are the lucky inheritor of a dream come true.
The second half of the 20th century saw a collection of geniuses, warriors, pacifists, cranks, entrepreneurs and visionaries labour to create a fabulous machine that could function as a typewriter and printing press, studio and theatre, paintbrush and gallery, piano and radio, the mail as well as the mail carrier. Not only did they develop such a device but by the turn of the millennium they had also managed to embed it in a worldwide system accessed by billions of people every day.
The networked computer is an amazing device, the first media machine that serves as the mode of production (you can make stuff), means of distribution (you can upload stuff to the network), site of reception (you can download stuff and interact with it), and locus of praise and critique (you can talk about the stuff you have downloaded or uploaded). The computer is the 21st century’s culture machine.
But for all the reasons there are to celebrate the computer, we must also tread with caution. This is because the networked computer has sparked a secret war between downloading and uploading – between passive consumption and active creation – whose outcome will shape our collective future in ways we can only begin to imagine. I call it a secret war for two reasons. First, most people do not realise that there are strong commercial agendas at work to keep them in passive consumption mode. Second, the majority of people who use networked computers to upload are not even aware of the significance of what they are doing.
All animals download, but only a few upload anything besides faeces and their own bodies. Beavers build dams, birds make nests and termites create mounds, yet for the most part, the animal kingdom moves through the world downloading. Humans are unique in their capacity to not only make tools but then turn around and use them to create superfluous material goods – paintings, sculpture and architecture – and superfluous experiences – music, literature, religion and philosophy. Of course, it is precisely these superfluous things that define human culture and ultimately what it is to be human. Downloading and consuming culture requires great skills, but failing to move beyond downloading is to strip oneself of a defining constituent of humanity.
For all the possibilities of our new culture machines, most people are still stuck in download mode. Even after the advent of widespread social media, a pyramid of production remains, with a small number of people uploading material, a slightly larger group commenting on or modifying that content, and a huge percentage remaining content to just consume. One reason for the persistence of this pyramid of production is that for the past half-century, much of the world’s media culture has been defined by a single medium – television – and television is defined by downloading.
Television is a one-way spigot gushing into our homes. The hardest task that television asks of anyone is to turn the power off after they have turned it on. The networked computer offers the first chance in 50 years to reverse the flow, to encourage thoughtful downloading and, even more importantly, meaningful uploading.
What counts as meaningful uploading? My definition revolves around the concept of “stickiness” – creations and experiences to which others adhere. Tweets about celebrity gaffes are not sticky but rather little Teflon balls of meaninglessness. In contrast, applications like tumblr.com, which allow users to combine pictures, words and other media in creative ways and then share them, have the potential to add stickiness by amusing, entertaining and enlightening others – and engendering more of the same. The explosion of apps for mobile phones and tablets means that even people with limited programming skills can now create sticky things.
The challenge the computer mounts to television thus bears little similarity to one format being replaced by another in the manner of record players being replaced by CD players. It is far more profound than that, because it can bring about a radical break from the culture of television and a shift from a consumption model to a production model.
This is a historic opportunity. Fifty years of television dominance has given birth to an unhealthy culture. Created like fizzy drinks and burgers by multinational conglomerates, the junk culture of broadcasting has turned us into intellectual diabetics. The cure is now in our collective grasp. It involves controlling and rationing our intake, or downloading, and increasing our levels of activity – uploading. Not to break it down too much, watching is ingesting is downloading and making is exercising is uploading.
Of course people will still download. Nobody uploads more than a tiny percentage of the culture they consume. But the goal must be to establish a balance between consumption and production. Using the networked computer as a download-only device, or even a download-mainly device, is a wasted opportunity of historic proportions.

Emerging technology to design compact computers: Nano

July 5, 2011 Leave a comment

The latest trend in the electronic device industry points towards compact, fast devices, and to meet this requirement, scientists in the US are working on a novel technology: nanotechnology.

Future processors are supposed to be more powerful and use less energy, and a team at the University of California is working on silicon wafers between five and 20 nanometres thick, which may be used in the design of IC (integrated circuit) chips. The novel technology is based on BCP (block co-polymer lithography).

According to the reports, microprocessor companies like Intel and IBM have invested billions of dollars in developing the new technology.

Scientists Developing Robotic Hand of the Future

July 5, 2011 Leave a comment

Researchers at Carlos III University of Madrid’s (UC3M) Robotics Lab are participating in the international research project known as HANDLE. The objective of the project is to create a robotic hand that can reproduce the abilities and movements of a human hand in order to achieve the optimal manipulation of objects.
HANDLE is a large-scale “Integrated Project” that is part of the Seventh European Framework Programme FP7; Spain is a participant in the project, whose goal is to reach an understanding of how humans manipulate objects in order to replicate their grasping and movement abilities in an artificial, anthropomorphic articulated hand, thus endowing it with greater autonomy and producing natural and effective movements. “In addition to the desired technological advances, we are working with basic aspects of multidisciplinary research in order to give the robotic hand system advanced perception capabilities, high-level information control and elements of intelligence that would allow it to recognize objects and the context of actions,” explains the head researcher on the UC3M team working on this project, Mohamed Abderrahim, of the Madrid university’s Department of Systems Engineering and Automation.
His team has already gotten very good results, in his opinion, in the areas of visual perception and kinematic and dynamic systems, which allow the system to recognize an object in its surroundings and pass the information on to the robotic hand’s planning and movement system.
The robotic hand that these researchers are working with is mostly made up of numerous high precision pieces of mechanized aluminum and plastic, as well as sensor and movement systems. In all, it has 20 actuators and can make 24 movements, the same as a human hand. Its size is also the same as that of an average adult male’s hand and it weighs approximately 4 kilograms. According to the partner in the project who manufactures the hand, the approximate cost of the version that is currently in development at UC3M comes to about 115,000 euros.
The problems involved in imitating a hand
When trying to recreate the movements of a human hand with a robotic system, there are several complex problems that must be resolved. In the first place, there is a lack of space. This is because “a human hand is incredibly complete, which makes it a challenge to try to put all of the necessary pieces into the robotic hand and to integrate all of the actuators that allow for mobility similar to that of a human hand,” comments Professor Mohamed Abderrahim. Second, another problem is that there are currently no sensors on the market that are small enough to be integrated into the device so that it can have sensitivity similar to that of a human hand and, thus, be able to make precise movements. Finally, although the researchers may manage to make a perfect robot from the mechanical and sensorial point of view, without intelligence elements the device will not be able to function autonomously nor adapt its movements and control to the characteristics of the objects, such as their geometry, texture, weight or use.
“It is not the same to take hold of a screwdriver to pass it to someone, or to put it away, as it is to use it, because in the last situation, it has to be reoriented in the hand until it is in the right position to be used. This position has to be decided by the intelligence part of the robotic hand,” the researchers say. “A robotic hand that is able to perform this seemingly simple task autonomously,” they say “only exists in science fiction movies.” “My personal estimation is that it will take around 15 years of research into these areas to build a robotic hand that is able to perform certain complex tasks with a level of precision, autonomy and dexterity that is similar to that of a human hand,” predicts Professor Abderrahim.
The research carried out by the HANDLE project’s partners has brought about results that are very interesting in the area of visual perception, motion planning, new sensors, acquisition of motor skills using artificial intelligence techniques, etc. Nevertheless, important challenges still remain when it comes to integrating the results obtained by all of the partners into a single system, which will be the result of the next two years of work.
HANDLE (Developmental pathway towards autonomy and dexterity in robot in-hand manipulation) is a large-scale “Integrated Project” funded by the European Union within the Seventh Framework Programme FP7, in which nine European institutions, coordinated by the Pierre and Marie Curie University of Paris (France), participate.