Wednesday, February 11, 2009

Make Money with a Blog or Free Host

Making money from blogging can be fun: you enjoy blogging and you earn from it. But as far as "making money with your blog" is concerned, it doesn't actually matter whether you enjoy it or not. The way you generate money from blogging is the same whether you blog with pleasure or with pain.

Let's get to the point. Every business follows the same process: bring potential customers to you, then sell your product, service or information. Translated into blogging terms: get readers to come to your blog and get them to click a link such as "Buy Now", "Join Now" or an AdSense ad. We call those readers traffic, and the share of your readers who click the link is the conversion rate. Of course, the higher the conversion rate, the better. Specifically, make-money blogging follows four steps:

Step 1: Blog. If you don't have a blog yet, start one.

Step 2: Bring traffic to your blog.

Step 3: PREsell with your blog. Your content should lead readers toward some kind of action: buying your own product or clicking a link you have prepared, for example. Notice the word "PREsell". It's the most important part of your make-money blogging plan. Experienced Internet marketers agree that people don't buy when you push a hard sell at them. This makes sense if you recall your own experience: did you ever buy something just because a blogger tried hard to sell it? Probably not! Instead, give your readers useful information and create a warm mood by showing them the benefits of the product you have in mind. Then, and only then, place a link for your readers to click through. That is what "PREsell" means.

Step 4: Monetize. This is your final goal, and monetization comes in two streams:
- Sell your own product, service or information, if you have one.
- If you don't have your own product, you can still earn money from your blog through the many affiliate programs, of which the most popular is Google AdSense.
But AdSense is just one example out of hundreds of affiliate programs, if not thousands. In short, make-money blogging is a step-by-step process: you blog, bring traffic, PREsell your readers and finally monetize. Without bringing traffic and PREselling, it is not possible to monetize your blog. That's why you need to keep in mind that make-money blogging is a step-by-step process. Space doesn't allow me to explain every detail here; if you want to know more about each step, you can read the article "What blog can make money?".

Author Info: Ryen Kim is an MBA specializing in marketing research and analysis. His current focus is blogging and home-based affiliate business as a work-at-home scheme. He is a founder of Blog Sites and runs the Home Based Affiliate Business website.

This Article was taken from Eqqu: http://www.eqqu.com and can be found in EqquArticles located at: http://www.eqqu.com/articles/article-How_to_Make_Money_With_Your_Blog-511.html


Paid Search: Use Lifetime Visitor Values To Raise Bids While Increasing Profits

It’s easy to set PPC bids based upon direct-response ROI. This simple metric lets you base every keyword bid on its profit and conversion rate. However, do those bids accurately reflect the entire revenue your PPC account generates?

To get a complete picture of revenue, and to make sure you aren’t underbidding and leaving profit on the table on some of your most important keywords, you should ask yourself the following questions:

Do consumers who buy from your site usually come back and buy again? Do you know how often? Are the purchases more or less the second time around?

Do you have recurring revenue from a sale such as a subscription or ongoing fee? Do you know how long those consumers stay subscribed to your products?

If you don’t have answers to those questions, you may be making one of the most common mistakes in bidding: not using Lifetime Visitor Values.

When determining your PPC bids, the general calculation is:

(Revenue per conversion / desired ROI) X (conversion rate) = Max Bid

This calculation makes it very straightforward to set keyword or AdGroup level bids that meet your advertising goals. To increase your bid, you need to increase your conversion rate, revenue per conversion, or lower your ROI expectations.

Of course, there are others who bid by profit, return on ad spend, etc. Each of these is a good way to bid depending on your advertising goals and how you wish to measure the effectiveness of your ad dollars.

However, for websites that generate additional revenue past the first sale, the Max CPC formula should be adjusted to:

(((Revenue per conversion) + (Repeat sales x average repeat sale amount)) / (desired ROI)) X (conversion rate) = Max Bid

This is a fairly simplistic look. Let’s see how those formulas adjust your max bid.

If you made the assumptions that:

* Your average conversion rate was 2%
* Your average first sale was $100
* Your average repeat sale was $50
* Your average customer bought from you 3 times
* You desire a 200% ROI

In the first formula for max bid, the numbers would look like:

* (Revenue per conversion / desired ROI) X (conversion rate) = Max Bid
* ($100/200%)x(2%)=$1

In the adjusted formula, the numbers now look like:

* (((Revenue per conversion)+(Repeat sales x average repeat sale amount))/(desired ROI))X(conversion rate)=Max Bid
* ((($100)+(3x$50))/200%)x(2%)=$2.50

Changing the bid from $1 to $2.50 actually maintains the same ROI for your company. However, if you took into account total profit, the numbers would also continue to climb.
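The two formulas can be sketched as a small Python helper (a minimal sketch; the function and argument names are my own, with dollar amounts as plain numbers and percentages as decimals, so a 200% ROI is 2.0 and a 2% conversion rate is 0.02):

```python
def max_bid(revenue_per_conversion, desired_roi, conversion_rate,
            repeat_sales=0, avg_repeat_sale=0.0):
    """Max CPC bid. With the defaults this is the single-sale formula;
    passing repeat-sale numbers gives the lifetime-value version."""
    lifetime_revenue = revenue_per_conversion + repeat_sales * avg_repeat_sale
    return (lifetime_revenue / desired_roi) * conversion_rate

# Single-sale example: $100 first sale, 200% ROI, 2% conversion rate.
print(max_bid(100, 2.0, 0.02))  # 1.0 -> $1.00

# Adding 3 repeat sales of $50 each raises the bid to $2.50.
print(max_bid(100, 2.0, 0.02, repeat_sales=3, avg_repeat_sale=50))  # 2.5
```

The only difference between the two calls is whether repeat revenue is counted, which is exactly why the lifetime-value bid comes out 2.5x higher at the same ROI.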

The most expensive marketing cost is acquiring your customers in the first place. The cost of the first relationship with a customer far outweighs the cost of re-marketing to someone who has placed trust in your company.

Why does this matter?

Let’s use a more real life example to illustrate moving from a single point of ROI calculation to using a Lifetime Visitor Value bid calculation.

One of my favorite industries to highlight why including total customer revenues is extremely important is the ultra competitive hosting industry.

If you think about a hosting service with these numbers:

* $10/month hosting
* 5% conversion rate
* 200% desired ROI

Using the single-sale formula, the max bid would be $0.25: ($10 / 200%) x (5%) = $0.25.

A $0.25 bid in the web hosting industry will not bring you very much traffic, as your ad will be buried several pages deep within the search results. Even with all the upsells that can be layered onto an account (email, storage, domains, privacy, bandwidth, website builders, etc), the numbers will never appear to work well. While someone will technically meet their desired ROI, the total profits will be so low that the company will not be around long.

However, if you add one crucial piece of information to the numbers above, subscription length, the dynamics of bidding and profit suddenly change. If our hypothetical hosting company knew that its average account stayed for 26 months, the formula would take on new life, resulting in a max bid of $6.75: ((($10)+(26x$10))/(200%))x(5%)=$6.75

A $6.75 vs $0.25 max CPC will significantly change one’s placement and traffic.
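The hosting numbers are easy to verify in a few lines of Python (a quick sketch; variable names are my own, and percentages are written as decimals):

```python
monthly_fee = 10.00        # $10/month hosting
conversion_rate = 0.05     # 5%
desired_roi = 2.0          # 200%
avg_months_retained = 26   # average subscription length after the first sale

# Single-payment bid: only the first month's revenue is counted.
single_bid = (monthly_fee / desired_roi) * conversion_rate
print(round(single_bid, 2))    # 0.25

# Lifetime-value bid: first month plus 26 further months at $10 each.
lifetime_revenue = monthly_fee + avg_months_retained * monthly_fee
lifetime_bid = (lifetime_revenue / desired_roi) * conversion_rate
print(round(lifetime_bid, 2))  # 6.75
```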

Adjusting bids for lifetime visitor values also applies to the retail, service, and any other market with repeat customers. Once you start doing the math around how repeat sales from customers affect your profits, you can see why airlines, hotels, and even grocery stores spend so much time, money, and energy around loyalty programs.

Conclusion

Marketing is not just about the acquisition of a onetime customer. It is also about building a relationship with your customers. It is time to stop advertising and start marketing.

There are a number of metrics that are very important to understand in making your business profitable. Some of them can be very difficult to measure and ascertain. However, the long term payoff from learning how to calculate these numbers goes far beyond just changing your max bids.

They are also business metrics or KPIs (key performance indicators). When digging into these numbers, you can often uncover hidden areas of profit or loss that can give you direction in how to improve your business.

If you currently don’t know if you have repeat customers, it’s time to start aggregating the data. Understanding your customer, his habits, and her tendencies can significantly improve your conversion rates, lifetime visitor values, and ultimately lead your company to be more profitable.

Brad Geddes is the Director of Search for LocalLaunch, a blogger at eWhisper.net, and a frequent conference speaker. The Paid Search column appears Mondays at Search Engine Land.

Opinions expressed in the article are those of the author, and not necessarily Search Engine Land.


5 easy ways to raise your Alexa ranking

Two advantages of a high Alexa traffic ranking are that you can make more money if you sell your site and that you can charge more for advertising.
Install the Alexa toolbar

Make sure to install the Alexa toolbar yourself. If you use several computers, install it on all of them. You might think of yourself as just one visitor who doesn't make much difference, but you should think of yourself as one of the statistical daily visitors to your site.
Say you have 100 unique visitors to your site per day. In that case you represent 1% of your visitors. Now say that 10 percent of typical visitors have the toolbar installed; then you represent 10% of the visitors that Alexa actually counts. So simply by installing the toolbar on one computer and visiting your site daily, you raise your measured traffic by 10%.
Offer your visitors to download the Alexa toolbar

Add a link to the Alexa toolbar and ask people to install it. You can create a custom version for free with your own logo. Raise the number of repeat visitors who have the Alexa toolbar installed and you will raise your ranking. This is really the same example as above, put another way: if 10% of your visitors have the toolbar and you have 100 visitors a day, getting just one more toolbar visitor per day will again raise your measured traffic by 10%.
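The arithmetic behind both of these examples can be checked in a few lines (a sketch only; the 10% toolbar-penetration figure is this article's assumption, not a published Alexa statistic):

```python
daily_visitors = 100
toolbar_share = 0.10   # assumed fraction of visitors running the Alexa toolbar

# Alexa only "sees" visitors who have the toolbar installed.
measured_visitors = daily_visitors * toolbar_share   # 10 per day

# One extra daily toolbar visitor (you, or a reader you convinced)
# raises the measured traffic by 1/10 = 10%.
relative_increase = 1 / measured_visitors
print(f"{relative_increase:.0%}")   # 10%
```

The smaller the toolbar share, the larger the relative impact of each toolbar-equipped visitor, which is why these tricks mattered more for low-traffic sites.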
Use the amazon.com affiliate program

Alexa is owned by Amazon.com, and they most likely use referrals in their statistics. Exactly how this counts is a bit murky, since they don't spell it out. But get more people to visit Amazon.com through your links and your Alexa ranking will most likely go up.
Link your domains

If you spread your site over more than one domain, each time a visitor moves from one domain to another they will be counted as a new visitor. For example, you can put your forum on a separate domain. If you have a signup form, put it on another domain and redirect visitors back to your main domain after they sign up.


Technorati Tags And Authority - Increase Your Blog's Traffic

What Is Technorati?

Technorati is an internet search engine for blogs with a dynamic ranking system.

How does Technorati work?
Technorati aggregates RSS (Really simple syndication) feeds in real time from the internet and sorts the posts it finds into categories using Technorati tags. After sorting posts into their proper categories Technorati offers visitors the ability to further sort the posts by freshness, by topic and by authority.

What are Technorati Tags?
Technorati tags are labels that writers assign to articles in order to tell Technorati what the article is about. WordPress blogs use plugins to make this task simple; however, WordPress version 2.5 has rendered most of those plugins obsolete, since Technorati is now able to use an article’s category name as its tag.

What is Technorati authority?
Technorati authority is a system in which points are assigned to blogs based on the number of other blogs that link to them. When another blog links to yours you gain one authority point; however, subsequent links from the same blog do not give you additional points. Technorati ignores links older than six months.
Step 2 - Setting up Technorati the right way
How do I get started with Technorati?
To add your blog posts to Technorati, visit www.technorati.com and create a free account. Once your account is set up you will be prompted to “claim your blog”. Follow the instructions provided by Technorati and claim your blog. If you’re running version 2.5 or later of WordPress you don’t need to do anything else.

My WordPress blog isn’t upgraded to version 2.5 yet; is that OK?
No, it’s not. Technorati announced on April 16th, 2008 that they will no longer index posts from outdated blog versions because of security issues.

How can I raise my Technorati rank?
Step 3 below contains advanced optimization methods, but there is one very basic fundamental you should consider first: proper use of categories and tags. As mentioned above, Technorati can read category names and use them as tags, so it stands to reason that you should choose your category names wisely.

The Technorati Top 100 page should give you a good starting point in deciding what to name your categories (or which tags to use if you prefer old-fashioned tagging). The “top tags” section tells you what the most-used tag names are; likewise, the “most searched” tags tell you what people are actually looking for. Your goal is to get your articles in front of potential readers, so browse through those lists to find good candidates to represent your articles.

Ack, nobody even searches for the categories I have on my blog!

You’d be very surprised at how many articles are improperly categorized on blogs. Some category names are extremely creative, but unfortunately nobody actually searches for those creative names, and so the articles are never seen. Adding articles to too many categories or using too many tags also increases the chances of a visitor being unhappy with the article. Choose wisely and make sure the tag or category name represents exactly what the article is about.
Step 3 - Strategies to help you get the most out of Technorati

Strategy #1 - Increase exposure time.
The most popular categories on Technorati change rapidly. If, for example, you write an article about technology and you tag it as such, it will show up on Technorati, BUT it will stay on the first page of technology results for a VERY short time, often less than 20 minutes. A more specific tag, such as “internet technology”, might be both more accurate and less popular, resulting in a longer stay on the first page of search results and in more targeted visitors.

Strategy #2 - Let SEO and Google/Yahoo/MSN/AOL etc raise your authority.
Articles with first-page results in search engines are often used as source material on other sites. Many times an author will link to the source of an article, which in turn provides you with an authority vote. The latest version of WordPress has the option of letting category names be the Technorati tags. Go a little further and add additional tags when writing your articles; those extra tags are then converted into keyword meta tags for that article. REMOVE the php the_tags… line of code from single-post.php in your theme so that tags do not show up on any article page; they are redundant in this scenario and hurt SEO.

Strategy #3 - Time your article releases.
If you only write a handful of articles per week you can time their release using the WordPress Post Timestamp feature. If you set it to a date in the future, your article won’t be published until then. The idea behind timed release is to have your articles show up on Technorati when the most people are likely to see them. That doesn’t necessarily mean during peak hours, either, depending on the topic and timezone. Play with this until you get results you’re happy with.

Strategy #4 - USE Technorati to find other users who enjoy the same subjects.
If you browse through your favorite categories on Technorati and spot other users who post to that category and have some authority, consider using some of their best articles as reference examples in your own articles. That helps visitors find more related content, it helps raise the authority of the blog you linked to, and it increases your chance of having that person or blog return the favor. Make friends with similar interests and favorite their blogs.

Strategy #5 - Technorati is not an island, get active!
Being active and interacting with other sites can help you raise your Technorati authority in subtle ways. A guest post, for example, can provide you with a link to your site, which becomes an authority point. Likewise, leaving great comments on people’s blog posts increases the chance they will notice you and talk about you. Comments won’t increase authority by themselves, but plugins like “top commentators” that give you a link will raise authority if you’re on the list when the person posts their next article. Create buzz and the net will notice you; so will Technorati.

These are just five of the strategies you can put to use on top of promoting your site with Technorati tools, please take a minute to share other strategies you know about and have tried with good results in the comments below.


Monday, February 9, 2009

All about Valentine's Day


Valentine's Day History

There are varying opinions as to the origin of Valentine's Day. Some experts state that it originated from St. Valentine, a Roman who was martyred for refusing to give up Christianity. He died on February 14, 269 A.D., the same day that had been devoted to love lotteries. Legend also says that St. Valentine left a farewell note for the jailer's daughter, who had become his friend, and signed it "From Your Valentine". Other aspects of the story say that Saint Valentine served as a priest at the temple during the reign of Emperor Claudius. Claudius then had Valentine jailed for defying him. In 496 A.D. Pope Gelasius set aside February 14 to honour St. Valentine.

Gradually, February 14 became the date for exchanging love messages and St. Valentine became the patron saint of lovers. The date was marked by sending poems and simple gifts such as flowers. There was often a social gathering or a ball.

In the United States, Miss Esther Howland is given credit for sending the first valentine cards. Commercial valentines were introduced in the 1800's and now the date is very commercialised. The town of Loveland, Colorado, does a large post office business around February 14. The spirit of good continues as valentines are sent out with sentimental verses and children exchange valentine cards at school.

The History of Saint Valentine's Day

Valentine's Day started in the time of the Roman Empire. In ancient Rome, February 14th was a holiday to honour Juno. Juno was the Queen of the Roman Gods and Goddesses. The Romans also knew her as the Goddess of women and marriage. The following day, February 15th, began the Feast of Lupercalia.


The lives of young boys and girls were strictly separate. However, one of the customs of the young people was name drawing. On the eve of the festival of Lupercalia the names of Roman girls were written on slips of paper and placed into jars. Each young man would draw a girl's name from the jar and would then be partners for the duration of the festival with the girl whom he chose. Sometimes the pairing of the children lasted an entire year, and often, they would fall in love and would later marry.


Under the rule of Emperor Claudius II Rome was involved in many bloody and unpopular campaigns. Claudius the Cruel was having a difficult time getting soldiers to join his military leagues. He believed that the reason was that Roman men did not want to leave their loves or families. As a result, Claudius cancelled all marriages and engagements in Rome. The good Saint Valentine was a priest at Rome in the days of Claudius II. He and Saint Marius aided the Christian martyrs and secretly married couples, and for this kind deed Saint Valentine was apprehended and dragged before the Prefect of Rome, who condemned him to be beaten to death with clubs and to have his head cut off. He suffered martyrdom on the 14th day of February, about the year 270. At that time it was the custom in Rome, a very ancient custom, indeed, to celebrate in the month of February the Lupercalia, feasts in honour of a heathen god. On these occasions, amidst a variety of pagan ceremonies, the names of young women were placed in a box, from which they were drawn by the men as chance directed.


The pastors of the early Christian Church in Rome endeavoured to do away with the pagan element in these feasts by substituting the names of saints for those of maidens. And as the Lupercalia began about the middle of February, the pastors appear to have chosen Saint Valentine's Day for the celebration of this new feast. So it seems that the custom of young men choosing maidens for valentines, or saints as patrons for the coming year, arose in this way.

St. Valentine's Story

Let me introduce myself. My name is Valentine. I lived in Rome during the third century. That was long, long ago! At that time, Rome was ruled by an emperor named Claudius. I didn't like Emperor Claudius, and I wasn't the only one! A lot of people shared my feelings.

Claudius wanted to have a big army. He expected men to volunteer to join. Many men just did not want to fight in wars. They did not want to leave their wives and families. As you might have guessed, not many men signed up. This made Claudius furious. So what happened? He had a crazy idea. He thought that if men were not married, they would not mind joining the army. So Claudius decided not to allow any more marriages. Young people thought his new law was cruel. I thought it was preposterous! I certainly wasn't going to support that law!

Did I mention that I was a priest? One of my favorite activities was to marry couples. Even after Emperor Claudius passed his law, I kept on performing marriage ceremonies -- secretly, of course. It was really quite exciting. Imagine a small candlelit room with only the bride and groom and myself. We would whisper the words of the ceremony, listening all the while for the steps of soldiers.

One night, we did hear footsteps. It was scary! Thank goodness the couple I was marrying escaped in time. I was caught. (Not quite as light on my feet as I used to be, I guess.) I was thrown in jail and told that my punishment was death.

I tried to stay cheerful. And do you know what? Wonderful things happened. Many young people came to the jail to visit me. They threw flowers and notes up to my window. They wanted me to know that they, too, believed in love.

One of these young people was the daughter of the prison guard. Her father allowed her to visit me in the cell. Sometimes we would sit and talk for hours. She helped me to keep my spirits up. She agreed that I did the right thing by ignoring the Emperor and going ahead with the secret marriages. On the day I was to die, I left my friend a little note thanking her for her friendship and loyalty. I signed it, "Love from your Valentine."

I believe that note started the custom of exchanging love messages on Valentine's Day. It was written on the day I died, February 14, 269 A.D. Now, every year on this day, people remember. But most importantly, they think about love and friendship. And when they think of Emperor Claudius, they remember how he tried to stand in the way of love, and they laugh -- because they know that love can't be beaten!

Valentine Traditions

Hundreds of years ago in England, many children dressed up as adults on Valentine's Day. They went singing from home to home. One verse they sang was:

Good morning to you, valentine;
Curl your locks as I do mine ---
Two before and three behind.
Good morning to you, valentine.

In Wales wooden love spoons were carved and given as gifts on February 14th. Hearts, keys and keyholes were favourite decorations on the spoons. The decoration meant, "You unlock my heart!"

In the Middle Ages, young men and women drew names from a bowl to see who their valentines would be. They would wear these names on their sleeves for one week. To wear your heart on your sleeve now means that it is easy for other people to know how you are feeling.

In some countries, a young woman may receive a gift of clothing from a young man. If she keeps the gift, it means she will marry him.

Some people used to believe that if a woman saw a robin flying overhead on Valentine's Day, it meant she would marry a sailor. If she saw a sparrow, she would marry a poor man and be very happy. If she saw a goldfinch, she would marry a millionaire.

A love seat is a wide chair. It was first made to seat one woman and her wide dress. Later, the love seat or courting seat had two sections, often in an S-shape. In this way, a couple could sit together -- but not too closely!

Think of five or six names of boys or girls you might marry. As you twist the stem of an apple, recite the names until the stem comes off. You will marry the person whose name you were saying when the stem fell off.

Pick a dandelion that has gone to seed. Take a deep breath and blow the seeds into the wind. Count the seeds that remain on the stem. That is the number of children you will have.

If you cut an apple in half and count how many seeds are inside, you will also know how many children you will have.


What Is Web 2.0?

Design Patterns and Business Models for the Next Generation of Software
The bursting of the dot-com bubble in the fall of 2001 marked a turning point for the web. Many people concluded that the web was overhyped, when in fact bubbles and consequent shakeouts appear to be a common feature of all technological revolutions. Shakeouts typically mark the point at which an ascendant technology is ready to take its place at center stage. The pretenders are given the bum's rush, the real success stories show their strength, and there begins to be an understanding of what separates one from the other.
The concept of "Web 2.0" began with a conference brainstorming session between O'Reilly and MediaLive International. Dale Dougherty, web pioneer and O'Reilly VP, noted that far from having "crashed", the web was more important than ever, with exciting new applications and sites popping up with surprising regularity. What's more, the companies that had survived the collapse seemed to have some things in common. Could it be that the dot-com collapse marked some kind of turning point for the web, such that a call to action such as "Web 2.0" might make sense? We agreed that it did, and so the Web 2.0 Conference was born.
In the year and a half since, the term "Web 2.0" has clearly taken hold, with more than 9.5 million citations in Google. But there's still a huge amount of disagreement about just what Web 2.0 means, with some people decrying it as a meaningless marketing buzzword, and others accepting it as the new conventional wisdom.
This article is an attempt to clarify just what we mean by Web 2.0.
In our initial brainstorming, we formulated our sense of Web 2.0 by example:
Web 1.0 --> Web 2.0
DoubleClick --> Google AdSense
Ofoto --> Flickr
Akamai --> BitTorrent
mp3.com --> Napster
Britannica Online --> Wikipedia
personal websites --> blogging
evite --> upcoming.org and EVDB
domain name speculation --> search engine optimization
page views --> cost per click
screen scraping --> web services
publishing --> participation
content management systems --> wikis
directories (taxonomy) --> tagging ("folksonomy")
stickiness --> syndication
The list went on and on. But what was it that made us identify one application or approach as "Web 1.0" and another as "Web 2.0"? (The question is particularly urgent because the Web 2.0 meme has become so widespread that companies are now pasting it on as a marketing buzzword, with no real understanding of just what it means. The question is particularly difficult because many of those buzzword-addicted startups are definitely not Web 2.0, while some of the applications we identified as Web 2.0, like Napster and BitTorrent, are not even properly web applications!) We began trying to tease out the principles that are demonstrated in one way or another by the success stories of web 1.0 and by the most interesting of the new applications.
1. The Web As Platform

Like many important concepts, Web 2.0 doesn't have a hard boundary, but rather, a gravitational core. You can visualize Web 2.0 as a set of principles and practices that tie together a veritable solar system of sites that demonstrate some or all of those principles, at a varying distance from that core.

Figure 1 shows a "meme map" of Web 2.0 that was developed at a brainstorming session during FOO Camp, a conference at O'Reilly Media. It's very much a work in progress, but shows the many ideas that radiate out from the Web 2.0 core.
For example, at the first Web 2.0 conference, in October 2004, John Battelle and I listed a preliminary set of principles in our opening talk. The first of those principles was "The web as platform." Yet that was also a rallying cry of Web 1.0 darling Netscape, which went down in flames after a heated battle with Microsoft. What's more, two of our initial Web 1.0 exemplars, DoubleClick and Akamai, were both pioneers in treating the web as a platform. People don't often think of it as "web services", but in fact, ad serving was the first widely deployed web service, and the first widely deployed "mashup" (to use another term that has gained currency of late). Every banner ad is served as a seamless cooperation between two websites, delivering an integrated page to a reader on yet another computer. Akamai also treats the network as the platform, and at a deeper level of the stack, building a transparent caching and content delivery network that eases bandwidth congestion.
Nonetheless, these pioneers provided useful contrasts because later entrants have taken their solution to the same problem even further, understanding something deeper about the nature of the new platform. Both DoubleClick and Akamai were Web 1.0 pioneers, yet we can also see how it's possible to realize more of the possibilities by embracing additional Web 2.0 design patterns.
Let's drill down for a moment into each of these three cases, teasing out some of the essential elements of difference.
Netscape vs. Google
If Netscape was the standard bearer for Web 1.0, Google is most certainly the standard bearer for Web 2.0, if only because their respective IPOs were defining events for each era. So let's start with a comparison of these two companies and their positioning.
Netscape framed "the web as platform" in terms of the old software paradigm: their flagship product was the web browser, a desktop application, and their strategy was to use their dominance in the browser market to establish a market for high-priced server products. Control over standards for displaying content and applications in the browser would, in theory, give Netscape the kind of market power enjoyed by Microsoft in the PC market. Much like the "horseless carriage" framed the automobile as an extension of the familiar, Netscape promoted a "webtop" to replace the desktop, and planned to populate that webtop with information updates and applets pushed to the webtop by information providers who would purchase Netscape servers.
In the end, both web browsers and web servers turned out to be commodities, and value moved "up the stack" to services delivered over the web platform.
Google, by contrast, began its life as a native web application, never sold or packaged, but delivered as a service, with customers paying, directly or indirectly, for the use of that service. None of the trappings of the old software industry are present. No scheduled software releases, just continuous improvement. No licensing or sale, just usage. No porting to different platforms so that customers can run the software on their own equipment, just a massively scalable collection of commodity PCs running open source operating systems plus homegrown applications and utilities that no one outside the company ever gets to see.
At bottom, Google requires a competency that Netscape never needed: database management. Google isn't just a collection of software tools, it's a specialized database. Without the data, the tools are useless; without the software, the data is unmanageable. Software licensing and control over APIs--the levers of power in the previous era--are irrelevant because the software never need be distributed but only performed, and also because without the ability to collect and manage the data, the software is of little use. In fact, the value of the software is proportional to the scale and dynamism of the data it helps to manage.
Google's service is not a server--though it is delivered by a massive collection of internet servers--nor a browser--though it is experienced by the user within the browser. Nor does its flagship search service even host the content that it enables users to find. Much like a phone call, which happens not just on the phones at either end of the call, but on the network in between, Google happens in the space between browser and search engine and destination content server, as an enabler or middleman between the user and his or her online experience.
While both Netscape and Google could be described as software companies, it's clear that Netscape belonged to the same software world as Lotus, Microsoft, Oracle, SAP, and other companies that got their start in the 1980's software revolution, while Google's fellows are other internet applications like eBay, Amazon, Napster, and yes, DoubleClick and Akamai.

DoubleClick vs. Overture and AdSense
Like Google, DoubleClick is a true child of the internet era. It harnesses software as a service, has a core competency in data management, and, as noted above, was a pioneer in web services long before web services even had a name. However, DoubleClick was ultimately limited by its business model. It bought into the '90s notion that the web was about publishing, not participation; that advertisers, not consumers, ought to call the shots; that size mattered, and that the internet was increasingly being dominated by the top websites as measured by MediaMetrix and other web ad scoring companies.
As a result, DoubleClick proudly cites on its website "over 2000 successful implementations" of its software. Yahoo! Search Marketing (formerly Overture) and Google AdSense, by contrast, already serve hundreds of thousands of advertisers apiece.
Overture and Google's success came from an understanding of what Chris Anderson refers to as "the long tail," the collective power of the small sites that make up the bulk of the web's content. DoubleClick's offerings require a formal sales contract, limiting their market to the few thousand largest websites. Overture and Google figured out how to enable ad placement on virtually any web page. What's more, they eschewed publisher/ad-agency friendly advertising formats such as banner ads and popups in favor of minimally intrusive, context-sensitive, consumer-friendly text advertising.
The Web 2.0 lesson: leverage customer-self service and algorithmic data management to reach out to the entire web, to the edges and not just the center, to the long tail and not just the head.
Not surprisingly, other web 2.0 success stories demonstrate this same behavior. eBay enables occasional transactions of only a few dollars between single individuals, acting as an automated intermediary. Napster (though shut down for legal reasons) built its network not by building a centralized song database, but by architecting a system in such a way that every downloader also became a server, and thus grew the network.
Akamai vs. BitTorrent
Like DoubleClick, Akamai is optimized to do business with the head, not the tail, with the center, not the edges. While it serves the benefit of the individuals at the edge of the web by smoothing their access to the high-demand sites at the center, it collects its revenue from those central sites.
BitTorrent, like other pioneers in the P2P movement, takes a radical approach to internet decentralization. Every client is also a server; files are broken up into fragments that can be served from multiple locations, transparently harnessing the network of downloaders to provide both bandwidth and data to other users. The more popular the file, in fact, the faster it can be served, as there are more users providing bandwidth and fragments of the complete file.
BitTorrent thus demonstrates a key Web 2.0 principle: the service automatically gets better the more people use it. While Akamai must add servers to improve service, every BitTorrent consumer brings his own resources to the party. There's an implicit "architecture of participation", a built-in ethic of cooperation, in which the service acts primarily as an intelligent broker, connecting the edges to each other and harnessing the power of the users themselves.
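A back-of-the-envelope model makes the contrast concrete. This is not BitTorrent's actual protocol (which adds piece selection, tit-for-tat incentives, and trackers); it is only a sketch of why a central server's delivery time grows with demand while an idealized swarm's does not:

```python
def client_server_time(n_peers, file_size, server_upload):
    # Akamai-style model: one central source must upload every
    # copy itself, so total time grows linearly with demand.
    return n_peers * file_size / server_upload

def swarm_time(n_peers, file_size, peer_upload, seed_upload):
    # Idealized BitTorrent-style swarm: every downloader also
    # uploads fragments, so aggregate capacity grows with demand.
    total_upload = seed_upload + n_peers * peer_upload
    return n_peers * file_size / total_upload
```

Doubling the audience doubles the server's delivery time, while the swarm's stays essentially flat, which is the "gets better the more people use it" property in miniature.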
2. Harnessing Collective Intelligence
The central principle behind the success of the giants born in the Web 1.0 era who have survived to lead the Web 2.0 era appears to be this: they have embraced the power of the web to harness collective intelligence:
• Hyperlinking is the foundation of the web. As users add new content, and new sites, it is bound in to the structure of the web by other users discovering the content and linking to it. Much as synapses form in the brain, with associations becoming stronger through repetition or intensity, the web of connections grows organically as an output of the collective activity of all web users.
• Yahoo!, the first great internet success story, was born as a catalog, or directory of links, an aggregation of the best work of thousands, then millions of web users. While Yahoo! has since moved into the business of creating many types of content, its role as a portal to the collective work of the net's users remains the core of its value.
• Google's breakthrough in search, which quickly made it the undisputed search market leader, was PageRank, a method of using the link structure of the web rather than just the characteristics of documents to provide better search results.
• eBay's product is the collective activity of all its users; like the web itself, eBay grows organically in response to user activity, and the company's role is as an enabler of a context in which that user activity can happen. What's more, eBay's competitive advantage comes almost entirely from the critical mass of buyers and sellers, which makes any new entrant offering similar services significantly less attractive.
• Amazon sells the same products as competitors such as Barnesandnoble.com, and they receive the same product descriptions, cover images, and editorial content from their vendors. But Amazon has made a science of user engagement. They have an order of magnitude more user reviews, invitations to participate in varied ways on virtually every page--and even more importantly, they use user activity to produce better search results. While a Barnesandnoble.com search is likely to lead with the company's own products, or sponsored results, Amazon always leads with "most popular", a real-time computation based not only on sales but other factors that Amazon insiders call the "flow" around products. With an order of magnitude more user participation, it's no surprise that Amazon's sales also outpace competitors.
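The PageRank idea mentioned above can be illustrated with a minimal power-iteration sketch. Real PageRank operates over billions of pages with many refinements; the three-page graph here is invented purely for illustration:

```python
def pagerank(links, damping=0.85, iterations=50):
    """Minimal power-iteration PageRank over a dict mapping
    each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new = {p: (1 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if outlinks:
                # A page passes its rank, damped, to the pages it links to.
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new[target] += share
            else:
                # Dangling page: spread its rank evenly.
                for p in pages:
                    new[p] += damping * rank[page] / n
        rank = new
    return rank

web = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
ranks = pagerank(web)
```

The point of the algorithm is visible even at this scale: "c", the most linked-to page, ends up with the highest rank, because the link structure itself, not the documents' contents, determines the ordering.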
Now, innovative companies that pick up on this insight and perhaps extend it even further, are making their mark on the web:
• Wikipedia, an online encyclopedia based on the unlikely notion that an entry can be added by any web user, and edited by any other, is a radical experiment in trust, applying Eric Raymond's dictum (originally coined in the context of open source software) that "with enough eyeballs, all bugs are shallow," to content creation. Wikipedia is already in the top 100 websites, and many think it will be in the top ten before long. This is a profound change in the dynamics of content creation!
• Sites like del.icio.us and Flickr, two companies that have received a great deal of attention of late, have pioneered a concept that some people call "folksonomy" (in contrast to taxonomy), a style of collaborative categorization of sites using freely chosen keywords, often referred to as tags. Tagging allows for the kind of multiple, overlapping associations that the brain itself uses, rather than rigid categories. In the canonical example, a Flickr photo of a puppy might be tagged both "puppy" and "cute"--allowing for retrieval along natural axes generated by user activity.
• Collaborative spam filtering products like Cloudmark aggregate the individual decisions of email users about what is and is not spam, outperforming systems that rely on analysis of the messages themselves.
• It is a truism that the greatest internet success stories don't advertise their products. Their adoption is driven by "viral marketing"--that is, recommendations propagating directly from one user to another. You can almost make the case that if a site or product relies on advertising to get the word out, it isn't Web 2.0.
• Even much of the infrastructure of the web--including the Linux, Apache, MySQL, and Perl, PHP, or Python code involved in most web servers--relies on the peer-production methods of open source, in themselves an instance of collective, net-enabled intelligence. There are more than 100,000 open source software projects listed on SourceForge.net. Anyone can add a project, anyone can download and use the code, and new projects migrate from the edges to the center as a result of users putting them to work, an organic software adoption process relying almost entirely on viral marketing.
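A toy version of the tag index behind the Flickr example above shows why folksonomy works: items carry any number of free-form tags, and retrieval works along every tag axis at once. The item names and tags are invented; real systems add per-user tagging, tag clouds, and more:

```python
from collections import defaultdict

class TagIndex:
    """Toy folksonomy: overlapping, freely chosen tags
    instead of a single rigid category per item."""
    def __init__(self):
        self.by_tag = defaultdict(set)

    def add(self, item, tags):
        for tag in tags:
            self.by_tag[tag.lower()].add(item)

    def find(self, *tags):
        # Intersection: items matching all requested tags.
        sets = [self.by_tag[t.lower()] for t in tags]
        return set.intersection(*sets) if sets else set()

photos = TagIndex()
photos.add("photo1", ["puppy", "cute"])
photos.add("photo2", ["puppy", "outdoors"])
photos.add("photo3", ["cute", "kitten"])
```

A query for "puppy" returns both puppy photos; a query for "puppy" plus "cute" narrows to the one carrying both tags. No taxonomist decided in advance which axes would matter.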
The lesson: Network effects from user contributions are the key to market dominance in the Web 2.0 era.

Blogging and the Wisdom of Crowds
One of the most highly touted features of the Web 2.0 era is the rise of blogging. Personal home pages have been around since the early days of the web, and the personal diary and daily opinion column around much longer than that, so just what is the fuss all about?
At its most basic, a blog is just a personal home page in diary format. But as Rich Skrenta notes, the chronological organization of a blog "seems like a trivial difference, but it drives an entirely different delivery, advertising and value chain."
One of the things that has made a difference is a technology called RSS. RSS is the most significant advance in the fundamental architecture of the web since early hackers realized that CGI could be used to create database-backed websites. RSS allows someone to link not just to a page, but to subscribe to it, with notification every time that page changes. Skrenta calls this "the incremental web." Others call it the "live web".
Now, of course, "dynamic websites" (i.e., database-backed sites with dynamically generated content) replaced static web pages well over ten years ago. What's dynamic about the live web are not just the pages, but the links. A link to a weblog is expected to point to a perennially changing page, with "permalinks" for any individual entry, and notification for each change. An RSS feed is thus a much stronger link than, say, a bookmark or a link to a single page.
RSS also means that the web browser is not the only means of viewing a web page. While some RSS aggregators, such as Bloglines, are web-based, others are desktop clients, and still others allow users of portable devices to subscribe to constantly updated content.
RSS is now being used to push not just notices of new blog entries, but also all kinds of data updates, including stock quotes, weather data, and photo availability. This use is actually a return to one of its roots: RSS was born in 1997 out of the confluence of Dave Winer's "Really Simple Syndication" technology, used to push out blog updates, and Netscape's "Rich Site Summary", which allowed users to create custom Netscape home pages with regularly updated data flows. Netscape lost interest, and the technology was carried forward by blogging pioneer Userland, Winer's company. In the current crop of applications, we see, though, the heritage of both parents.
But RSS is only part of what makes a weblog different from an ordinary web page. Tom Coates remarks on the significance of the permalink:
It may seem like a trivial piece of functionality now, but it was effectively the device that turned weblogs from an ease-of-publishing phenomenon into a conversational mess of overlapping communities. For the first time it became relatively easy to gesture directly at a highly specific post on someone else's site and talk about it. Discussion emerged. Chat emerged. And - as a result - friendships emerged or became more entrenched. The permalink was the first - and most successful - attempt to build bridges between weblogs.
In many ways, the combination of RSS and permalinks adds many of the features of NNTP, the Network News Protocol of the Usenet, onto HTTP, the web protocol. The "blogosphere" can be thought of as a new, peer-to-peer equivalent to Usenet and bulletin-boards, the conversational watering holes of the early internet. Not only can people subscribe to each others' sites, and easily link to individual comments on a page, but also, via a mechanism known as trackbacks, they can see when anyone else links to their pages, and can respond, either with reciprocal links, or by adding comments.
Interestingly, two-way links were the goal of early hypertext systems like Xanadu. Hypertext purists have celebrated trackbacks as a step towards two-way links. But note that trackbacks are not properly two-way--rather, they are really (potentially) symmetrical one-way links that create the effect of two-way links. The difference may seem subtle, but in practice it is enormous. Social networking systems like Friendster, Orkut, and LinkedIn, which require acknowledgment by the recipient in order to establish a connection, lack the same scalability as the web. As noted by Caterina Fake, co-founder of the Flickr photo sharing service, attention is only coincidentally reciprocal. (Flickr thus allows users to set watch lists--any user can subscribe to any other user's photostream via RSS. The object of attention is notified, but does not have to approve the connection.)
If an essential part of Web 2.0 is harnessing collective intelligence, turning the web into a kind of global brain, the blogosphere is the equivalent of constant mental chatter in the forebrain, the voice we hear in all of our heads. It may not reflect the deep structure of the brain, which is often unconscious, but is instead the equivalent of conscious thought. And as a reflection of conscious thought and attention, the blogosphere has begun to have a powerful effect.
First, because search engines use link structure to help predict useful pages, bloggers, as the most prolific and timely linkers, have a disproportionate role in shaping search engine results. Second, because the blogging community is so highly self-referential, bloggers paying attention to other bloggers magnifies their visibility and power. The "echo chamber" that critics decry is also an amplifier.
If it were merely an amplifier, blogging would be uninteresting. But like Wikipedia, blogging harnesses collective intelligence as a kind of filter. What James Surowiecki calls "the wisdom of crowds" comes into play, and much as PageRank produces better results than analysis of any individual document, the collective attention of the blogosphere selects for value.
While mainstream media may see individual blogs as competitors, what is really unnerving is that the competition is with the blogosphere as a whole. This is not just a competition between sites, but a competition between business models. The world of Web 2.0 is also the world of what Dan Gillmor calls "we, the media," a world in which "the former audience", not a few people in a back room, decides what's important.
3. Data is the Next Intel Inside
Every significant internet application to date has been backed by a specialized database: Google's web crawl, Yahoo!'s directory (and web crawl), Amazon's database of products, eBay's database of products and sellers, MapQuest's map databases, Napster's distributed song database. As Hal Varian remarked in a personal conversation last year, "SQL is the new HTML." Database management is a core competency of Web 2.0 companies, so much so that we have sometimes referred to these applications as "infoware" rather than merely software.
This fact leads to a key question: Who owns the data?
In the internet era, one can already see a number of cases where control over the database has led to market control and outsized financial returns. The monopoly on domain name registry initially granted by government fiat to Network Solutions (later purchased by Verisign) was one of the first great moneymakers of the internet. While we've argued that business advantage via controlling software APIs is much more difficult in the age of the internet, control of key data sources is not, especially if those data sources are expensive to create or amenable to increasing returns via network effects.
Look at the copyright notices at the base of every map served by MapQuest, maps.yahoo.com, maps.msn.com, or maps.google.com, and you'll see the line "Maps copyright NavTeq, TeleAtlas," or with the new satellite imagery services, "Images copyright Digital Globe." These companies made substantial investments in their databases (NavTeq alone reportedly invested $750 million to build their database of street addresses and directions. Digital Globe spent $500 million to launch their own satellite to improve on government-supplied imagery.) NavTeq has gone so far as to imitate Intel's familiar Intel Inside logo: Cars with navigation systems bear the imprint, "NavTeq Onboard." Data is indeed the Intel Inside of these applications, a sole source component in systems whose software infrastructure is largely open source or otherwise commodified.
The now hotly contested web mapping arena demonstrates how a failure to understand the importance of owning an application's core data will eventually undercut its competitive position. MapQuest pioneered the web mapping category in 1995, yet when Yahoo!, and then Microsoft, and most recently Google, decided to enter the market, they were easily able to offer a competing application simply by licensing the same data.
Contrast, however, the position of Amazon.com. Like competitors such as Barnesandnoble.com, its original database came from ISBN registry provider R.R. Bowker. But unlike MapQuest, Amazon relentlessly enhanced the data, adding publisher-supplied data such as cover images, table of contents, index, and sample material. Even more importantly, they harnessed their users to annotate the data, such that after ten years, Amazon, not Bowker, is the primary source for bibliographic data on books, a reference source for scholars and librarians as well as consumers. Amazon also introduced their own proprietary identifier, the ASIN, which corresponds to the ISBN where one is present, and creates an equivalent namespace for products without one. Effectively, Amazon "embraced and extended" their data suppliers.
Imagine if MapQuest had done the same thing, harnessing their users to annotate maps and directions, adding layers of value. It would have been much more difficult for competitors to enter the market just by licensing the base data.
The recent introduction of Google Maps provides a living laboratory for the competition between application vendors and their data suppliers. Google's lightweight programming model has led to the creation of numerous value-added services in the form of mashups that link Google Maps with other internet-accessible data sources. Paul Rademacher's housingmaps.com, which combines Google Maps with Craigslist apartment rental and home purchase data to create an interactive housing search tool, is the pre-eminent example of such a mashup.
At present, these mashups are mostly innovative experiments, done by hackers. But entrepreneurial activity follows close behind. And already, one can see that for at least one class of developer, Google has taken the role of data source away from Navteq and inserted themselves as a favored intermediary. We expect to see battles between data suppliers and application vendors in the next few years, as both realize just how important certain classes of data will become as building blocks for Web 2.0 applications.
The race is on to own certain classes of core data: location, identity, calendaring of public events, product identifiers and namespaces. In many cases, where there is significant cost to create the data, there may be an opportunity for an Intel Inside style play, with a single source for the data. In others, the winner will be the company that first reaches critical mass via user aggregation, and turns that aggregated data into a system service.
For example, in the area of identity, PayPal, Amazon's 1-click, and the millions of users of communications systems, may all be legitimate contenders to build a network-wide identity database. (In this regard, Google's recent attempt to use cell phone numbers as an identifier for Gmail accounts may be a step towards embracing and extending the phone system.) Meanwhile, startups like Sxip are exploring the potential of federated identity, in quest of a kind of "distributed 1-click" that will provide a seamless Web 2.0 identity subsystem. In the area of calendaring, EVDB is an attempt to build the world's largest shared calendar via a wiki-style architecture of participation. While the jury's still out on the success of any particular startup or approach, it's clear that standards and solutions in these areas, effectively turning certain classes of data into reliable subsystems of the "internet operating system", will enable the next generation of applications.
A further point must be noted with regard to data, and that is user concerns about privacy and their rights to their own data. In many of the early web applications, copyright is only loosely enforced. For example, Amazon lays claim to any reviews submitted to the site, but in the absence of enforcement, people may repost the same review elsewhere. However, as companies begin to realize that control over data may be their chief source of competitive advantage, we may see heightened attempts at control.
Much as the rise of proprietary software led to the Free Software movement, we expect the rise of proprietary databases to result in a Free Data movement within the next decade. One can see early signs of this countervailing trend in open data projects such as Wikipedia, the Creative Commons, and in software projects like Greasemonkey, which allow users to take control of how data is displayed on their computer.

4. End of the Software Release Cycle
As noted above in the discussion of Google vs. Netscape, one of the defining characteristics of internet era software is that it is delivered as a service, not as a product. This fact leads to a number of fundamental changes in the business model of such a company:
1. Operations must become a core competency. Google's or Yahoo!'s expertise in product development must be matched by an expertise in daily operations. So fundamental is the shift from software as artifact to software as service that the software will cease to perform unless it is maintained on a daily basis. Google must continuously crawl the web and update its indices, continuously filter out link spam and other attempts to influence its results, continuously and dynamically respond to hundreds of millions of asynchronous user queries, simultaneously matching them with context-appropriate advertisements.
It's no accident that Google's system administration, networking, and load balancing techniques are perhaps even more closely guarded secrets than their search algorithms. Google's success at automating these processes is a key part of their cost advantage over competitors.
It's also no accident that scripting languages such as Perl, Python, PHP, and now Ruby, play such a large role at web 2.0 companies. Perl was famously described by Hassan Schroeder, Sun's first webmaster, as "the duct tape of the internet." Dynamic languages (often called scripting languages and looked down on by the software engineers of the era of software artifacts) are the tool of choice for system and network administrators, as well as application developers building dynamic systems that require constant change.
2. Users must be treated as co-developers, in a reflection of open source development practices (even if the software in question is unlikely to be released under an open source license.) The open source dictum, "release early and release often" in fact has morphed into an even more radical position, "the perpetual beta," in which the product is developed in the open, with new features slipstreamed in on a monthly, weekly, or even daily basis. It's no accident that services such as Gmail, Google Maps, Flickr, del.icio.us, and the like may be expected to bear a "Beta" logo for years at a time.
Real time monitoring of user behavior to see just which new features are used, and how they are used, thus becomes another required core competency. A web developer at a major online service remarked: "We put up two or three new features on some part of the site every day, and if users don't adopt them, we take them down. If they like them, we roll them out to the entire site."
Cal Henderson, the lead developer of Flickr, recently revealed that they deploy new builds up to every half hour. This is clearly a radically different development model! While not all web applications are developed in as extreme a style as Flickr, almost all web applications have a development cycle that is radically unlike anything from the PC or client-server era. It is for this reason that a recent ZDnet editorial concluded that Microsoft won't be able to beat Google: "Microsoft's business model depends on everyone upgrading their computing environment every two to three years. Google's depends on everyone exploring what's new in their computing environment every day."
While Microsoft has demonstrated enormous ability to learn from and ultimately best its competition, there's no question that this time, the competition will require Microsoft (and by extension, every other existing software company) to become a deeply different kind of company. Native Web 2.0 companies enjoy a natural advantage, as they don't have old patterns (and corresponding business models and revenue sources) to shed.


5. Lightweight Programming Models
Once the idea of web services became au courant, large companies jumped into the fray with a complex web services stack designed to create highly reliable programming environments for distributed applications.
But much as the web succeeded precisely because it overthrew much of hypertext theory, substituting a simple pragmatism for ideal design, RSS has become perhaps the single most widely deployed web service because of its simplicity, while the complex corporate web services stacks have yet to achieve wide deployment.
Similarly, Amazon.com's web services are provided in two forms: one adhering to the formalisms of the SOAP (Simple Object Access Protocol) web services stack, the other simply providing XML data over HTTP, in a lightweight approach sometimes referred to as REST (Representational State Transfer). While high value B2B connections (like those between Amazon and retail partners like ToysRUs) use the SOAP stack, Amazon reports that 95% of the usage is of the lightweight REST service.
This same quest for simplicity can be seen in other "organic" web services. Google's recent release of Google Maps is a case in point. Google Maps' simple AJAX (Javascript and XML) interface was quickly decrypted by hackers, who then proceeded to remix the data into new services.
Mapping-related web services had been available for some time from GIS vendors such as ESRI as well as from MapQuest and Microsoft MapPoint. But Google Maps set the world on fire because of its simplicity. While experimenting with any of the formal vendor-supported web services required a formal contract between the parties, the way Google Maps was implemented left the data for the taking, and hackers soon found ways to creatively re-use that data.
There are several significant lessons here:
1. Support lightweight programming models that allow for loosely coupled systems. The complexity of the corporate-sponsored web services stack is designed to enable tight coupling. While this is necessary in many cases, many of the most interesting applications can indeed remain loosely coupled, and even fragile. The Web 2.0 mindset is very different from the traditional IT mindset!
2. Think syndication, not coordination. Simple web services, like RSS and REST-based web services, are about syndicating data outwards, not controlling what happens when it gets to the other end of the connection. This idea is fundamental to the internet itself, a reflection of what is known as the end-to-end principle.
3. Design for "hackability" and remixability. Systems like the original web, RSS, and AJAX all have this in common: the barriers to re-use are extremely low. Much of the useful software is actually open source, but even when it isn't, there is little in the way of intellectual property protection. The web browser's "View Source" option made it possible for any user to copy any other user's web page; RSS was designed to empower the user to view the content he or she wants, when it's wanted, not at the behest of the information provider; the most successful web services are those that have been easiest to take in new directions unimagined by their creators. The phrase "some rights reserved," which was popularized by the Creative Commons to contrast with the more typical "all rights reserved," is a useful guidepost.
Innovation in Assembly
Lightweight business models are a natural concomitant of lightweight programming and lightweight connections. The Web 2.0 mindset is good at re-use. A new service like housingmaps.com was built simply by snapping together two existing services. Housingmaps.com doesn't have a business model (yet)--but for many small-scale services, Google AdSense (or perhaps Amazon associates fees, or both) provides the snap-in equivalent of a revenue model.
These examples provide an insight into another key web 2.0 principle, which we call "innovation in assembly." When commodity components are abundant, you can create value simply by assembling them in novel or effective ways. Much as the PC revolution provided many opportunities for innovation in assembly of commodity hardware, with companies like Dell making a science out of such assembly, thereby defeating companies whose business model required innovation in product development, we believe that Web 2.0 will provide opportunities for companies to beat the competition by getting better at harnessing and integrating services provided by others.
6. Software Above the Level of a Single Device
One other feature of Web 2.0 that deserves mention is the fact that it's no longer limited to the PC platform. In his parting advice to Microsoft, long time Microsoft developer Dave Stutz pointed out that "Useful software written above the level of the single device will command high margins for a long time to come."
Of course, any web application can be seen as software above the level of a single device. After all, even the simplest web application involves at least two computers: the one hosting the web server and the one hosting the browser. And as we've discussed, the development of the web as platform extends this idea to synthetic applications composed of services provided by multiple computers.
But as with many areas of Web 2.0, where the "2.0-ness" is not something new, but rather a fuller realization of the true potential of the web platform, this phrase gives us a key insight into how to design applications and services for the new platform.
To date, iTunes is the best exemplar of this principle. This application seamlessly reaches from the handheld device to a massive web back-end, with the PC acting as a local cache and control station. There have been many previous attempts to bring web content to portable devices, but the iPod/iTunes combination is one of the first such applications designed from the ground up to span multiple devices. TiVo is another good example.
iTunes and TiVo also demonstrate many of the other core principles of Web 2.0. They are not web applications per se, but they leverage the power of the web platform, making it a seamless, almost invisible part of their infrastructure. Data management is most clearly the heart of their offering. They are services, not packaged applications (although in the case of iTunes, it can be used as a packaged application, managing only the user's local data.) What's more, both TiVo and iTunes show some budding use of collective intelligence, although in each case, their experiments are at war with the IP lobby's. There's only a limited architecture of participation in iTunes, though the recent addition of podcasting changes that equation substantially.
This is one of the areas of Web 2.0 where we expect to see some of the greatest change, as more and more devices are connected to the new platform. What applications become possible when our phones and our cars are not consuming data but reporting it? Real time traffic monitoring, flash mobs, and citizen journalism are only a few of the early warning signs of the capabilities of the new platform.

7. Rich User Experiences
As early as Pei Wei's Viola browser in 1992, the web was being used to deliver "applets" and other kinds of active content within the web browser. Java's introduction in 1995 was framed around the delivery of such applets. JavaScript and then DHTML were introduced as lightweight ways to provide client side programmability and richer user experiences. Several years ago, Macromedia coined the term "Rich Internet Applications" (which has also been picked up by open source Flash competitor Laszlo Systems) to highlight the capabilities of Flash to deliver not just multimedia content but also GUI-style application experiences.
However, the potential of the web to deliver full scale applications didn't hit the mainstream till Google introduced Gmail, quickly followed by Google Maps, web based applications with rich user interfaces and PC-equivalent interactivity. The collection of technologies used by Google was christened AJAX, in a seminal essay by Jesse James Garrett of web design firm Adaptive Path. He wrote:
"Ajax isn't a technology. It's really several technologies, each flourishing in its own right, coming together in powerful new ways. Ajax incorporates:
• standards-based presentation using XHTML and CSS;
• dynamic display and interaction using the Document Object Model;
• data interchange and manipulation using XML and XSLT;
• asynchronous data retrieval using XMLHttpRequest;
• and JavaScript binding everything together."
AJAX is also a key component of Web 2.0 applications such as Flickr, now part of Yahoo!, 37signals' applications Basecamp and Backpack, as well as other Google applications such as Gmail and Orkut. We're entering an unprecedented period of user interface innovation, as web developers are finally able to build web applications as rich as local PC-based applications.
Interestingly, many of the capabilities now being explored have been around for many years. In the late '90s, both Microsoft and Netscape had a vision of the kind of capabilities that are now finally being realized, but their battle over the standards to be used made cross-browser applications difficult. It was only when Microsoft definitively won the browser wars, and there was a single de-facto browser standard to write to, that this kind of application became possible. And while Firefox has reintroduced competition to the browser market, at least so far we haven't seen the destructive competition over web standards that held back progress in the '90s.
We expect to see many new web applications over the next few years, both truly novel applications, and rich web reimplementations of PC applications. Every platform change to date has also created opportunities for a leadership change in the dominant applications of the previous platform.
Gmail has already provided some interesting innovations in email, combining the strengths of the web (accessible from anywhere, deep database competencies, searchability) with user interfaces that approach PC interfaces in usability. Meanwhile, other mail clients on the PC platform are nibbling away at the problem from the other end, adding IM and presence capabilities. How far are we from an integrated communications client combining the best of email, IM, and the cell phone, using VoIP to add voice capabilities to the rich capabilities of web applications? The race is on.
It's easy to see how Web 2.0 will also remake the address book. A Web 2.0-style address book would treat the local address book on the PC or phone merely as a cache of the contacts you've explicitly asked the system to remember. Meanwhile, a web-based synchronization agent, Gmail-style, would remember every message sent or received, every email address and every phone number used, and build social networking heuristics to decide which ones to offer up as alternatives when an answer wasn't found in the local cache. Lacking an answer there, the system would query the broader social network.
A Web 2.0 word processor would support wiki-style collaborative editing, not just standalone documents. But it would also support the rich formatting we've come to expect in PC-based word processors. Writely is a good example of such an application, although it hasn't yet gained wide traction.
Nor will the Web 2.0 revolution be limited to PC applications. Salesforce.com demonstrates how the web can be used to deliver software as a service, in enterprise scale applications such as CRM.
The competitive opportunity for new entrants is to fully embrace the potential of Web 2.0. Companies that succeed will create applications that learn from their users, using an architecture of participation to build a commanding advantage not just in the software interface, but in the richness of the shared data.
Core Competencies of Web 2.0 Companies
In exploring the seven principles above, we've highlighted some of the principal features of Web 2.0. Each of the examples we've explored demonstrates one or more of those key principles, but may miss others. Let's close, therefore, by summarizing what we believe to be the core competencies of Web 2.0 companies:
• Services, not packaged software, with cost-effective scalability
• Control over unique, hard-to-recreate data sources that get richer as more people use them
• Trusting users as co-developers
• Harnessing collective intelligence
• Leveraging the long tail through customer self-service
• Software above the level of a single device
• Lightweight user interfaces, development models, AND business models
The next time a company claims that it's "Web 2.0," test their features against the list above. The more points they score, the more they are worthy of the name. Remember, though, that excellence in one area may be more telling than some small steps in all seven.


Web Site Problems Web Site Solution

Every web site problem has a web site solution somewhere, and it can be found. Somewhere in the business development of your own home business, you can find ways to help others with the jobs and businesses they need, profitably for both of you.

Have you ever noticed cars at the food market or hardware store with a sign saying something like "We run your errands cheap"? Those owners have a web site problem with a very reasonable web site solution. The problem is that they have no web site presence, so they have no marketing channel to bring in leads, sales, and profit.

Did you think they were naive to run a small business like that? What if it fit them, or you, as well as an old glove? Take a look below, my friend; the business development answer is here.

Think web site if you want to promote effectively and make a profit. It would be a lie to say you will have no web site trouble or problems along the way. But if you stay at a job that controls your total income, at the complete mercy of others, I can assure you that you will not be secure and happy without a separate income coming in.

Busy people like you with lives to match, stay-at-home moms who have web site problems and want a web site solution, executives with job-stress illnesses, frustrated employees, bored and unchallenged retirees, and would-be entrepreneurs: these are all good prospects for businesses offering virtual personal assistance or errand services, businesses you can run from home. They just need a little hand-holding to get started, up and running.

If a growing entrepreneur lacks a full college degree and doubts he or she is qualified to create a home business, they should think again. You might feel you have no qualified people to brainstorm with, but there is now software available to help you come up with good ideas.

Some small business possibilities you will see working in your own area are:

Errand service, which can be as simple as picking up the dry cleaning, ordering the birthday cake, dropping off prescriptions at the drug store, or walking and playing with the dog or cat. All these tasks and more are near us and within our reach daily.

The great thing about convenience errand service businesses is not only that they are home businesses, but that they need little start-up capital. They are a good, cheap way to gain valuable, normally expensive experience at a low price.

Many young and even middle-aged people were brought up using computers almost from the beginning of their schooling. With a minimum amount of knowledge, which will grow quickly, you can help people with zero computer skills learn basic e-mail, folder setup, and other skills, and help them avoid web site trouble.

Any job that lets the owner work at home frees them from additional rental overhead, utility expenses, and the daily commute. There are also many good, inexpensive local eBay training classes now available across the USA and Canada.

These are taught by seasoned eBay merchants. You can build successful web sites from online auctions alone.

Of course, if our business person offers pickup and delivery (as an errand service would need to), the firm takes on the additional expenses of gasoline and wear on the vehicle. If you do something like this, good business development will dramatically change your income.

Small businesses need a web site to get noticed. Does that involve web site problems and call for a general web site solution? Yes, it does.

The owner will surely see an increase in auto insurance as well. These are some of the things to consider when weighing an office job, working for a boss, against an at-home business.

To survive financially in this day and age, you need a computer plus online knowledge that keeps growing, so you can shop around for the best prices and rates on everything. It is quicker and more efficient for you.

If the future home business owner is more highly skilled than typical errand-service jobs require (and gets a little smarter every single day), the home business could be run as a virtual assistant service. This tiny paragraph can make you far wealthier, healthier, and wiser over your whole life; it is worth many thousands of dollars to you over a lifetime.

Make yourself a little better every single day. Things will go better for you when you do.

A really good web site host will have a live, successful human being who handles business development and web site problems and has a web site solution for you. They will answer your calls and e-mail questions about exactly what it takes to build a profitable web site. They are friendly and very good at what they do with their web site building plans.

Once you have skills, you will want to consider getting Internet marketing experience as an affiliate, starting with one partner program. Search far and deep for the best affiliate training program there is, one that will spill its guts and teach you everything it can.

They will have hard-core, professional, enticing, and interesting web sites you can latch onto (with your own identification information, so you get credit for sales) until you can afford your own. If you have office skills such as word processing, database and contact management, and perhaps desktop publishing, you could be the ideal independent virtual assistant entrepreneur.

If you can find a web site host who will help you develop your business development skills, your growth will be very nice. So nice that I am reluctant to describe the extent here, for fear you would not believe it and would let it pass you by.

When you find a web host that looks good, do not sign up until you get a guaranteed cost from them. Ask for a complete breakdown of all their costs and their guarantee.

Ask every other site host you check out for a complete list of their hosting tools and hand-holding services. If any of them will not provide this, just forget about them.

Any that will not give you a money-back guarantee, you want to forget as well. No exceptions. Understand? Good! Let us keep going.

The advantage of a virtual assistant over a local errand service is that the VA can work for anyone, anywhere. Most of the work can be done by email, fax, and ever-developing mail systems.

You would not have the additional transportation costs, but the initial start-up of your at-home business will be more costly up front.

This is normal. That is, unless you already have home office equipment in place: a PC, printer, scanner/fax (all-in-one), digital camera, flash drives, speakers, and a secure, comfortable home office environment ready to go.

Whether you, as an entrepreneur, want to take a variety of jobs and assignments from various clients, or take a few clients who commit for extended periods of time, is your call. There are clear advantages to both.

I hope you now see the promotional advantages of having a web site presence, in spite of the business development web site problems and the web site solutions you will surely need to seek out.

This is all part of the great scheme of things in society today. The first thing anyone will be impressed by is whether you have a web site and can hand them a business card with your email and web site address on it.

Anything less will paint you as a pure amateur. Having a web presence will put you far ahead of your competition and present you to potential customers as a mover and shaker they feel comfortable doing business with.

