Sunday, 20 May 2012

ETHICAL ISSUES IN NETWORKING

What are the ethical issues?

Many of the ethical issues that face IT professionals involve privacy. For example:

Should you read the private e-mail of your network users just “because you can?” Is it okay to read employees’ e-mail as a security measure, to ensure that sensitive company information isn’t being disclosed? Is it okay to read employees’ e-mail to ensure that company rules (for instance, against personal use of the e-mail system) aren’t being violated? If you do read employees’ e-mail, should you disclose that policy to them? Before or after the fact?
Is it okay to monitor the Web sites visited by your network users? Should you routinely keep logs of visited sites? Is it negligent to not monitor such Internet usage, to prevent the possibility of pornography in the workplace that could create a hostile work environment?
Is it okay to place key loggers on machines on the network to capture everything the user types? Screen capture programs so you can see everything that’s displayed? Should users be informed that they’re being watched in this way?
Is it okay to read the documents and look at the graphics files that are stored on users’ computers or in their directories on the file server?
Remember that we’re not talking about legal questions here. A company may very well have the legal right to monitor everything an employee does with its computer equipment. We’re talking about the ethical aspects of having the ability to do so.

As a network administrator or security professional, you have rights and privileges that allow you to access most of the data on the systems on your network. You may even be able to access encrypted data if you have access to the recovery agent account. What you do with those abilities depends in part on your particular job duties (for example, if monitoring employee mail is a part of your official job description) and in part on your personal ethical beliefs about these issues.

The slippery slope

A common concept in any ethics discussion is the “slippery slope.” This pertains to the ease with which a person can go from doing something that doesn’t really seem unethical (such as scanning employees’ e-mail “just for fun”) to doing things that are increasingly unethical (such as making little changes in their mail messages or diverting messages to the wrong recipient).

In looking at the list of privacy issues above, it’s easy to justify each of the actions described. But it’s also easy to see how each of those actions could “morph” into much less justifiable actions. For example, the information you gained from reading someone’s e-mail could be used to embarrass that person, to gain a political advantage within the company, to get him/her disciplined or fired, or even for blackmail.

The slippery slope concept can also go beyond using your IT skills. If it’s okay to read other employees’ e-mail, is it also okay to go through their desk drawers when they aren’t there? To open their briefcases or purses?

Real world ethical dilemmas

What if your perusal of random documents reveals company trade secrets? What if you later leave the company and go to work for a competitor? Is it wrong to use that knowledge in your new job? Would it be “more wrong” if you printed out those documents and took them with you, than if you just relied on your memory?

What if the documents you read showed that the company was violating government regulations or laws? Do you have a moral obligation to turn them in, or are you ethically bound to respect your employer’s privacy? Would it make a difference if you signed a non-disclosure agreement when you accepted the job?

IT and security consultants who do work for multiple companies have even more ethical issues to deal with. If you learn things about one of your clients that might affect your other client(s), where does your loyalty lie?

Then there are money issues. The proliferation of network attacks, hacks, viruses, and other threats to IT infrastructures has caused many companies to "be afraid, be very afraid." As a security consultant, it may be very easy to play on that fear to convince companies to spend far more money than they really need to. Is it wrong for you to charge hundreds or even thousands of dollars per hour for your services, or is it a case of "whatever the market will bear?" Is it wrong for you to mark up the equipment and software that you get for the customer when you pass the cost through? What about kickbacks from equipment manufacturers? Is it wrong to accept "commissions" from them for convincing your clients to go with their products? Or what if the connection is more subtle? Is it wrong to steer your clients toward the products of companies in which you hold stock?

Another ethical issue involves promising more than you can deliver, or manipulating data to obtain higher fees. You can install technologies and configure settings to make a client’s network more secure, but you can never make it completely secure. Is it wrong to talk a client into replacing their current firewalls with those of a different manufacturer, or switching to an open source operating system – which changes, coincidentally, will result in many more billable hours for you – on the premise that this is the answer to their security problems?
Think about it.........

Is it really satisfying to hack and dig into other people's information?
And what if it's done to beat a competitor? You cheated, man!!!









NETWORK SECURITY

Network security consists of the provisions and policies adopted by a network administrator to prevent and monitor unauthorized access, misuse, modification, or denial of a computer network and network-accessible resources. Network security involves the authorization of access to data in a network, which is controlled by the network administrator. Users choose or are assigned an ID and password or other authenticating information that allows them access to information and programs within their authority. Network security covers a variety of computer networks, both public and private, that are used in everyday jobs: conducting transactions and communications among businesses, government agencies and individuals. Networks can be private, such as those within a company, or open to public access. Network security is involved in organizations, enterprises, and other types of institutions. It does what its title explains: it secures the network, and protects and oversees the operations being carried out on it. The most common and simple way of protecting a network resource is by assigning it a unique name and a corresponding password.


Network security starts with authenticating the user, commonly with a username and a password. Since this requires just one item to authenticate the user (the password, which is something the user 'knows'), this is sometimes termed one-factor authentication. With two-factor authentication, something the user 'has' is also used (e.g. a security token or 'dongle', an ATM card, or a mobile phone); and with three-factor authentication, something the user 'is' is also used (e.g. a fingerprint or retinal scan).
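To make the factor idea concrete, here is a rough Python sketch of one- versus two-factor checking. The salted-hash scheme and the fake token code are illustrative assumptions only, not a production design:

```python
import hashlib
import hmac
import os

# Factor 1: something the user knows. Store only a salted hash of the password.
def hash_password(password: str, salt: bytes) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

salt = os.urandom(16)
stored_hash = hash_password("s3cret-passw0rd", salt)  # saved when the account is created

def check_password(attempt: str) -> bool:
    return hmac.compare_digest(hash_password(attempt, salt), stored_hash)

# Factor 2: something the user has. We fake a hardware token with a shared
# secret and a counter; real systems use standardized TOTP/HOTP devices.
token_secret = b"secret-shared-with-the-token"

def expected_code(counter: int) -> str:
    digest = hmac.new(token_secret, str(counter).encode(), "sha256").hexdigest()
    return str(int(digest, 16))[-6:]  # last six digits as the one-time code

def two_factor_login(password: str, code: str, counter: int) -> bool:
    return check_password(password) and hmac.compare_digest(code, expected_code(counter))

print(two_factor_login("s3cret-passw0rd", expected_code(42), 42))  # True
```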
Once authenticated, a firewall enforces access policies, such as which services the network users are allowed to access. Though effective in preventing unauthorized access, this component may fail to check potentially harmful content such as computer worms or Trojans being transmitted over the network. Anti-virus software or an intrusion prevention system (IPS) helps detect and inhibit the action of such malware. An anomaly-based intrusion detection system may also monitor the network and traffic for unexpected (i.e. suspicious) content or behavior and other anomalies to protect resources, e.g. from denial-of-service attacks or an employee accessing files at strange times. Individual events occurring on the network may be logged for audit purposes and for later high-level analysis.
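As a toy illustration of the "accessing files at strange times" example, the snippet below flags logged file-server events outside business hours. The event format and the 08:00-18:00 window are made-up assumptions; real intrusion detection systems are far more sophisticated:

```python
from datetime import datetime

BUSINESS_HOURS = range(8, 18)  # 08:00-17:59; an arbitrary assumption

def is_suspicious(event: dict) -> bool:
    # Flag any access whose timestamp falls outside the business-hours window.
    ts = datetime.fromisoformat(event["timestamp"])
    return ts.hour not in BUSINESS_HOURS

events = [
    {"user": "alice", "file": "payroll.xlsx", "timestamp": "2012-05-20T03:12:00"},
    {"user": "bob", "file": "report.doc", "timestamp": "2012-05-20T10:30:00"},
]
for e in events:
    if is_suspicious(e):
        print(f"ALERT: {e['user']} accessed {e['file']} at {e['timestamp']}")
```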

Communication between two hosts using a network may be encrypted to maintain privacy.
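For instance, Python's standard ssl module can wrap an ordinary socket in an encrypted TLS channel; a minimal client-side sketch (the hostname is just an example):

```python
import socket
import ssl

# Open a TLS-encrypted channel to a host (hostname is a placeholder).
context = ssl.create_default_context()  # verifies the server certificate
with socket.create_connection(("example.com", 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname="example.com") as tls_sock:
        print("Negotiated cipher:", tls_sock.cipher())
        # Anything sent on tls_sock is encrypted on the wire.
        tls_sock.sendall(b"GET / HTTP/1.0\r\nHost: example.com\r\n\r\n")
        print(tls_sock.recv(200))
```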
Honeypots, essentially decoy network-accessible resources, may be deployed in a network as surveillance and early-warning tools, as the honeypots are not normally accessed for legitimate purposes. Techniques used by attackers attempting to compromise these decoy resources are studied during and after an attack, to keep an eye on new exploitation techniques. Such analysis may be used to further tighten security of the actual network being protected by the honeypot.

Security management

Security management for networks is different for all kinds of situations. A home or small office may only require basic security while large businesses may require high-maintenance and advanced software and hardware to prevent malicious attacks from hacking and spamming.












WEB CONFERENCE

Web conferencing refers to a service that allows conferencing events to be shared with remote locations. In general the service is made possible by Internet technologies, particularly TCP/IP connections. The service allows real-time point-to-point communications as well as multicast communications from one sender to many receivers. It allows text-based messages, voice and video chat to be shared simultaneously across geographically dispersed locations. Applications of web conferencing include meetings, training events, lectures, or short presentations from any computer.

Some web conferencing solutions require additional software to be installed (usually via download) by the presenter and participants, while others eliminate this step by providing physical hardware or an appliance. In general, system requirements depend on the vendor. Some web conferencing vendors provide a complete solution while others enhance existing technologies. Most also provide a means of interfacing with email and calendaring clients so that customers can plan an event and share information about it in advance. A participant can be either an individual person or a group. System requirements that allow individuals within a group to participate as individuals (e.g. when an audience member asks a question) depend on the size of the group, and handling such requirements is often the responsibility of the group. Most vendors also provide either a recorded copy of an event, or a means for a subscriber to record an event. Support for planning a shared event is typically integrated with calendar and email applications. The method of controlling access to an event is provided by the vendor. Additional value-added features are included as desired by the vendors who provide them. With a few exceptions (e.g. Openmeetings, TokBox, WebHuddle, BigBlueButton), web conferencing services are built on proprietary rather than free software; see Comparison of web conferencing software.








   credit to MR.WIKI

Saturday, 19 May 2012

FTP

File Transfer Protocol (FTP) is a standard network protocol used to transfer files from one host to another host over a TCP-based network, such as the Internet. It is often used to upload web pages and other documents from a private development machine to a public web-hosting server. FTP is built on a client-server architecture and uses separate control and data connections between the client and the server. FTP users may authenticate themselves using a clear-text sign-in protocol, normally in the form of a username and password, but can connect anonymously if the server is configured to allow it. For secure transmission that hides (encrypts) your username and password, as well as encrypts the content, you can try using a client that uses SSH File Transfer Protocol.
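Here is a minimal sketch of such a session using Python's standard ftplib. The host name is a placeholder, and the credentials are the sample ones used in the URL example later in this post:

```python
from ftplib import FTP

ftp = FTP("ftp.example.com")          # opens the control connection on port 21
ftp.login(user="user001", passwd="secretpassword")  # USER/PASS, sent in clear text
print(ftp.getwelcome())               # server greeting, e.g. "220 ..."
ftp.cwd("mydirectory")                # change remote directory
with open("myfile.txt", "wb") as f:
    ftp.retrbinary("RETR myfile.txt", f.write)  # download over a data connection
ftp.quit()
```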

The first FTP client applications were interactive command-line tools, implementing standard commands and syntax. Graphical user interfaces have since been developed for many of the popular desktop operating systems in use today, including general web design programs like Microsoft Expression Web, and specialist FTP clients such as CuteFTP.



Communication and data transfer

The protocol is specified in RFC 959, which is summarized here.
The server responds over the control connection with three-digit status codes in ASCII, with an optional text message. For example, "200" (or "200 OK") means that the last command was successful. The numbers represent the code for the response and the optional text represents a human-readable explanation or request (e.g. <Need account for storing file>). An ongoing transfer of file data over the data connection can be aborted using an interrupt message sent over the control connection.
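You can watch these status codes go by with a raw socket. The host below is a placeholder, and the exact reply texts vary from server to server:

```python
import socket

# Drive the FTP control connection by hand; each server reply begins
# with a three-digit status code such as "220", "331" or "230".
with socket.create_connection(("ftp.example.com", 21)) as s:
    print(s.recv(1024).decode())        # e.g. "220 Service ready"
    s.sendall(b"USER anonymous\r\n")
    print(s.recv(1024).decode())        # e.g. "331 User name okay, need password"
    s.sendall(b"PASS guest@example.com\r\n")
    print(s.recv(1024).decode())        # e.g. "230 User logged in"
    s.sendall(b"QUIT\r\n")
    print(s.recv(1024).decode())        # e.g. "221 Goodbye"
```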


[Figure: Illustration of starting a passive connection using port 21]
FTP may run in active or passive mode, which determines how the data connection is established. In active mode, the client creates a TCP control connection to the server and sends the server the client's IP address and an arbitrary client port number, and then waits until the server initiates the data connection over TCP to that client IP address and client port number. In situations where the client is behind a firewall and unable to accept incoming TCP connections, passive mode may be used. In this mode, the client uses the control connection to send a PASV command to the server and then receives a server IP address and server port number from the server, which the client then uses to open a data connection from an arbitrary client port to the server IP address and server port number received. Both modes were updated in September 1998 to support IPv6. Further changes were introduced to the passive mode at that time, updating it to extended passive mode.
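In Python's ftplib the difference is a one-line toggle (placeholder host; passive mode is the library's default):

```python
from ftplib import FTP

ftp = FTP("ftp.example.com")   # placeholder host
ftp.login()                    # anonymous login

# Passive mode (ftplib's default): the client opens the data connection,
# which usually works through NATs and client-side firewalls.
ftp.set_pasv(True)
ftp.retrlines("LIST")

# Active mode: the client sends PORT and the *server* connects back,
# which often fails when the client is behind a firewall or NAT.
ftp.set_pasv(False)
ftp.retrlines("LIST")
ftp.quit()
```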

While transferring data over the network, four data representations can be used:
  • ASCII mode: used for text. Data is converted, if needed, from the sending host's character representation to "8-bit ASCII" before transmission, and (again, if necessary) to the receiving host's character representation. As a consequence, this mode is inappropriate for files that contain data other than plain text.
  • Image mode (commonly called binary mode): the sending machine sends each file byte for byte, and the recipient stores the byte stream as it receives it. (Image mode support has been recommended for all implementations of FTP; see the sketch after this list.)
  • EBCDIC mode: used for plain text between hosts using the EBCDIC character set. This mode is otherwise like ASCII mode.
  • Local mode: allows two computers with identical setups to send data in a proprietary format without the need to convert it to ASCII.
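A quick ftplib sketch of the two common representations (placeholder host and file names): retrlines issues TYPE A for ASCII transfers, while retrbinary issues TYPE I for image mode:

```python
from ftplib import FTP

ftp = FTP("ftp.example.com")   # placeholder host
ftp.login()

# ASCII mode: retrlines sends "TYPE A" and hands the file over line by line,
# allowing line endings to be converted between hosts. Text files only.
ftp.retrlines("RETR readme.txt", print)

# Image (binary) mode: retrbinary sends "TYPE I" and delivers the raw bytes
# untouched, which is what you want for archives, images, executables, etc.
with open("photo.jpg", "wb") as f:
    ftp.retrbinary("RETR photo.jpg", f.write)
ftp.quit()
```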
For text files, different format control and record structure options are provided. These features were designed to facilitate files containing Telnet or ASA formatting.
Data transfer can be done in any of three modes:
  • Stream mode: data is sent as a continuous stream, relieving FTP from doing any processing; all processing is left up to TCP. No end-of-file indicator is needed unless the data is divided into records.
  • Block mode: FTP breaks the data into several blocks (block header, byte count, and data field) and then passes it on to TCP.
  • Compressed mode: data is compressed using a single algorithm (usually run-length encoding).
Login
FTP login utilizes a normal username and password scheme for granting access. The username is sent to the server using the USER command, and the password is sent using the PASS command. If the information provided by the client is accepted by the server, the server will send a greeting to the client and the session will commence. If the server supports it, users may log in without providing login credentials, but the server may authorize only limited access for such sessions.
Anonymous FTP
A host that provides an FTP service may provide anonymous FTP access. Users typically log into the service with an 'anonymous' (lower-case and case-sensitive in some FTP servers) account when prompted for a user name. Although users are commonly asked to send their email address in lieu of a password, no verification is actually performed on the supplied data. Many FTP hosts whose purpose is to provide software updates will allow anonymous logins.

NAT and firewall traversal

FTP normally transfers data by having the server connect back to the client, after the PORT command is sent by the client. This is problematic for both NATs and firewalls, which do not allow connections from the Internet towards internal hosts. For NATs, an additional complication is that the representation of the IP addresses and port number in the PORT command refers to the internal host's IP address and port, rather than the public IP address and port of the NAT.
There are two approaches to this problem. One is that the FTP client and FTP server use the PASV command, which causes the data connection to be established from the FTP client to the server. This is widely used by modern FTP clients. Another approach is for the NAT to alter the values of the PORT command, using an application-level gateway for this purpose.
Web browser support

Most common web browsers can retrieve files hosted on FTP servers, although they may not support protocol extensions such as FTPS.[3][12] When an FTP (rather than an HTTP) URL is supplied, the accessible contents on the remote server are presented in a manner similar to that used for other Web content. A full-featured FTP client can be run within Firefox in the form of an extension called FireFTP.
Syntax
FTP URL syntax is described in RFC 1738,[13] taking the form: ftp://[<user>[:<password>]@]<host>[:<port>]/<url-path> (the bracketed parts are optional). For example:
ftp://public.ftp-servers.example.com/mydirectory/myfile.txt
or:
ftp://user001:secretpassword@private.ftp-servers.example.com/mydirectory/myfile.txt
More details on specifying a username and password may be found in the browsers' documentation, for example that of Firefox and Internet Explorer. By default, most web browsers use passive (PASV) mode, which more easily traverses end-user firewalls.
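Python's urllib can fetch such URLs directly; the URL below is the placeholder one from above:

```python
from urllib.request import urlopen

# urllib understands ftp:// URLs out of the box (placeholder URL).
url = "ftp://public.ftp-servers.example.com/mydirectory/myfile.txt"
with urlopen(url) as response:
    data = response.read()
print(len(data), "bytes retrieved")
```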

Security

FTP was not designed to be a secure protocol, especially by today's standards, and has many security weaknesses. In May 1999, the authors of RFC 2577 listed vulnerabilities to the following problems:

  • Bounce attacks
  • Spoof attacks
  • Brute force attacks
  • Packet capture (sniffing)
  • Username protection
  • Port stealing

FTP is not able to encrypt its traffic; all transmissions are in clear text, and usernames, passwords, commands and data can be easily read by anyone able to perform packet capture (sniffing) on the network. This problem is common to many of the Internet Protocol specifications (such as SMTP, Telnet, POP and IMAP) that were designed prior to the creation of encryption mechanisms such as TLS or SSL. A common solution to this problem is to use the "secure", TLS-protected versions of the insecure protocols (e.g. FTPS for FTP, TelnetS for Telnet, etc.) or a different, more secure protocol that can handle the job, such as the SFTP/SCP tools included with most implementations of the Secure Shell protocol.

Secure FTP


There are several methods of securely transferring files that have been called "Secure FTP" at one point or another.

FTPS
Explicit FTPS is an extension to the FTP standard that allows clients to request that the FTP session be encrypted. This is done by sending the "AUTH TLS" command. The server has the option of allowing or denying connections that do not request TLS. This protocol extension is defined in the proposed standard RFC 4217. Implicit FTPS is a deprecated standard for FTP that required the use of an SSL or TLS connection. It was specified to use different ports than plain FTP.
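Python's standard ftplib supports explicit FTPS through its FTP_TLS class; a minimal sketch with placeholder host and credentials:

```python
from ftplib import FTP_TLS

ftps = FTP_TLS("ftp.example.com")
ftps.login(user="user001", passwd="secretpassword")  # sends AUTH TLS first,
                                                     # then USER/PASS encrypted
ftps.prot_p()              # "PROT P": encrypt the data connections too
ftps.retrlines("LIST")
ftps.quit()
```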

SFTP
SFTP, the "SSH File Transfer Protocol," is not related to FTP except that it also transfers files and has a similar command set for users. SFTP, or secure FTP, is a program that uses Secure Shell (SSH) to transfer files. Unlike standard FTP, it encrypts both commands and data, preventing passwords and sensitive information from being transmitted openly over the network. It is functionally similar to FTP, but because it uses a different protocol, you can't use a standard FTP client to talk to an SFTP server, nor can you connect to an FTP server with a client that supports only SFTP.
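A short SFTP sketch using the third-party paramiko library (assumed installed via pip; host and credentials are placeholders):

```python
import paramiko  # third-party SSH library; assumed installed

# SFTP runs over SSH, so everything below is encrypted end to end.
transport = paramiko.Transport(("sftp.example.com", 22))
transport.connect(username="user001", password="secretpassword")
sftp = paramiko.SFTPClient.from_transport(transport)

sftp.get("mydirectory/myfile.txt", "myfile.txt")  # download
sftp.put("local.txt", "mydirectory/local.txt")    # upload

sftp.close()
transport.close()
```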

FTP over SSH (not SFTP)
FTP over SSH (not SFTP) refers to the practice of tunneling a normal FTP session over an SSH connection. Because FTP uses multiple TCP connections (unusual for a TCP/IP protocol that is still in use), it is particularly difficult to tunnel over SSH. With many SSH clients, attempting to set up a tunnel for the control channel (the initial client-to-server connection on port 21) will protect only that channel; when data is transferred, the FTP software at either end will set up new TCP connections (data channels), which bypass the SSH connection and thus have no confidentiality or integrity protection, etc.
Otherwise, it is necessary for the SSH client software to have specific knowledge of the FTP protocol, to monitor and rewrite FTP control-channel messages and autonomously open new packet forwardings for FTP data channels. Software packages that support this mode include:
  • Tectia ConnectSecure (Win/Linux/Unix) from SSH Communications Security's software suite
  • Tectia Server for IBM z/OS from SSH Communications Security's software suite
  • FONC (GPL licensed)
  • Co:Z FTPSSH Proxy
FTP over SSH is sometimes referred to as secure FTP; this should not be confused with other methods of securing FTP, such as SSL/TLS (FTPS). Other methods of transferring files using SSH that are not related to FTP include SFTP and SCP; in each of these, the entire conversation (credentials and data) is always protected by the SSH protocol.







Kamsahamidaa~~~ 
MR..WIKI

Saturday, 12 May 2012

HTML

What is HTML?

HTML is a language for describing web pages.


  • HTML stands for HyperText Markup Language
  • HTML is not a programming language; it is a markup language
  • A markup language is a set of markup tags
  • HTML uses markup tags to describe web pages
HTML is written in the form of HTML elements consisting of tags enclosed in angle brackets (like <html>), within the web page content. HTML tags most commonly come in pairs like <h1> and </h1>, although some tags, known as empty elements, are unpaired, for example <img>. The first tag in a pair is the start tag, the second tag is the end tag (they are also called opening tags and closing tags). In between these tags web designers can add text, tags, comments and other types of text-based content.

The purpose of a web browser is to read HTML documents and compose them into visible or audible web pages. The browser does not display the HTML tags, but uses the tags to interpret the content of the page.
HTML elements form the building blocks of all websites. HTML allows images and objects to be embedded and can be used to create interactive forms. It provides a means to create structured documents by denoting structural semantics for text such as headings, paragraphs, lists, links, quotes and other items. It can embed scripts in languages such as JavaScript which affect the behavior of HTML webpages.
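Python's built-in html.parser shows this idea in miniature: like a browser, it does not output the tags themselves but uses them to interpret the content:

```python
from html.parser import HTMLParser

# A toy "browser" that reports the structure the tags describe
# instead of displaying the tags themselves.
class TagReporter(HTMLParser):
    def handle_starttag(self, tag, attrs):
        print("start tag:", tag)
    def handle_endtag(self, tag):
        print("end tag:  ", tag)
    def handle_data(self, data):
        if data.strip():
            print("content:  ", data.strip())

TagReporter().feed(
    "<html><body><h1>My First Heading</h1>"
    "<p>My first paragraph.</p></body></html>"
)
```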


During that class, we learned how to build our own webpage...
It was fun to learn, although we only covered the most basic steps of HTML.
Thank you so much MR. RAZAK, now we can create and design our webpage easily.....











Tuesday, 1 May 2012

SEARCH ENGINE

A web search engine is designed to search for information on the World Wide Web. The search results are generally presented in a list of results, often referred to as search engine results pages (SERPs). The results may consist of web pages, images and other types of files. Some search engines also mine data available in databases or open directories. Unlike web directories, which are maintained only by human editors, search engines also maintain real-time information by running an algorithm on a web crawler.


A search engine operates in the following order:

  • Web crawling
  • Indexing
  • Searching

Web search engines work by storing information about many web pages, which they retrieve from the HTML itself. These pages are retrieved by a Web crawler (sometimes also known as a spider), an automated Web browser which follows every link on the site. Exclusions can be made by the use of robots.txt. The contents of each page are then analyzed to determine how it should be indexed (for example, words are extracted from the titles, headings, or special fields called meta tags). Data about web pages are stored in an index database for use in later queries. A query can be a single word. The purpose of an index is to allow information to be found as quickly as possible. Some search engines, such as Google, store all or part of the source page (referred to as a cache) as well as information about the web pages, whereas others, such as AltaVista, store every word of every page they find. The cached page always holds the actual search text, since it is the version that was actually indexed, so it can be very useful when the content of the current page has been updated and the search terms are no longer in it. This problem might be considered a mild form of linkrot, and Google's handling of it increases usability by satisfying the principle of least astonishment: the user normally expects the search terms to be on the returned pages. Increased search relevance makes these cached pages very useful, even beyond the fact that they may contain data that is no longer available elsewhere.
When a user enters a query into a search engine (typically by using keywords), the engine examines its index and provides a listing of best-matching web pages according to its criteria, usually with a short summary containing the document's title and sometimes parts of the text. The index is built from the information stored with the data and the method by which the information is indexed. Unfortunately, there are currently no known public search engines that allow documents to be searched by date. Most search engines support the use of the boolean operators AND, OR and NOT to further specify the search query. Boolean operators are for literal searches that allow the user to refine and extend the terms of the search; the engine looks for the words or phrases exactly as entered. Some search engines provide an advanced feature called proximity search, which allows users to define the distance between keywords. There is also concept-based searching, which involves using statistical analysis on pages containing the words or phrases you search for. Finally, natural language queries allow the user to type a question in the same form one would ask it to a human; an example of such a site is ask.com.
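The snippet below sketches both ideas at toy scale: it builds a tiny index over a few made-up "crawled" pages (the page names and texts are invented for illustration) and answers a boolean AND query against it:

```python
# Build a toy inverted index: each word maps to the set of pages containing it.
pages = {
    "page1.html": "web search engines crawl and index pages",
    "page2.html": "an inverted index maps words to pages",
    "page3.html": "crawlers follow every link on the site",
}

index = {}
for url, text in pages.items():
    for word in text.lower().split():
        index.setdefault(word, set()).add(url)

def search_and(*terms):
    """Return pages containing ALL the terms (boolean AND)."""
    sets = [index.get(t.lower(), set()) for t in terms]
    return set.intersection(*sets) if sets else set()

print(search_and("index", "pages"))   # {'page1.html', 'page2.html'}
```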
The usefulness of a search engine depends on the relevance of the result set it gives back. While there may be millions of web pages that include a particular word or phrase, some pages may be more relevant, popular, or authoritative than others. Most search engines employ methods to rank the results to provide the "best" results first. How a search engine decides which pages are the best matches, and what order the results should be shown in, varies widely from one engine to another. The methods also change over time as Internet usage changes and new techniques evolve. There are two main types of search engine that have evolved: one is a system of predefined and hierarchically ordered keywords that humans have programmed extensively. The other is a system that generates an "inverted index" by analyzing texts it locates. This second form relies much more heavily on the computer itself to do the bulk of the work.
Most Web search engines are commercial ventures supported by advertising revenue and, as a result, some employ the practice of allowing advertisers to pay money to have their listings ranked higher in search results. Those search engines which do not accept money for their search engine results make money by running search related ads alongside the regular search engine results. The search engines make money every time someone clicks on one of these ads.








MR. WIKI...SARANGHAEYO~~~~



Web 2.0
Web 2.0 is a loosely defined intersection of web application features that facilitate participatory information sharing, interoperability, user-centered design, and collaboration on the World Wide Web. A Web 2.0 site allows users to interact and collaborate with each other in a social media dialogue as creators (prosumers) of user-generated content in a virtual community, in contrast to websites where users (consumers) are limited to the passive viewing of content that was created for them. Examples of Web 2.0 include social networking sites, blogs, wikis, video sharing sites, hosted services, web applications, mashups and folksonomies.

CHARACTERISTICS

Web 2.0 websites allow users to do more than just retrieve information. By increasing what was already possible in "Web 1.0", they provide the user with more user-interface, software and storage facilities, all through their browser. This has been called "Network as platform" computing. Users can provide the data that is on a Web 2.0 site and exercise some control over that data. These sites may have an "Architecture of participation" that encourages users to add value to the application as they use it. Some scholars have made the case that cloud computing is a form of Web 2.0 because cloud computing is simply an implication of computing on the Internet.
The concept of Web-as-participation-platform captures many of these characteristics. Bart Decrem, a founder and former CEO of Flock, calls Web 2.0 the "participatory Web" and regards the Web-as-information-source as Web 1.0.
Web 2.0 offers all users the same freedom to contribute. While this opens the possibility for rational debate and collaboration, it also opens the possibility for "spamming" and "trolling" by less rational users. The impossibility of excluding group members who don't contribute to the provision of goods from sharing the profits gives rise to the possibility that rational members will prefer to withhold their contribution and free-ride on the contributions of others. This requires what is sometimes called radical trust by the management of the website. According to Best, the characteristics of Web 2.0 are: rich user experience, user participation, dynamic content, metadata, web standards and scalability. Further characteristics, such as openness, freedom and collective intelligence by way of user participation, can also be viewed as essential attributes of Web 2.0.

TECHNOLOGIES

An important part of Web 2.0 is the social Web, which is a fundamental shift in the way people communicate. The social web consists of a number of online tools and platforms where people share their perspectives, opinions, thoughts and experiences. Web 2.0 applications tend to interact much more with the end user. As such, the end user is not only a user of the application but also a participant, by:


  • Podcasting
  • Blogging
  • Tagging
  • Contributing to RSS
  • Social bookmarking
  • Social networking


The popularity of the term Web 2.0, along with the increasing use of blogs, wikis, and social networking technologies, has led many in academia and business to coin a flurry of 2.0s, including Library 2.0, Social Work 2.0, Enterprise 2.0, PR 2.0, Classroom 2.0, Publishing 2.0, Medicine 2.0, Telco 2.0, Travel 2.0, Government 2.0, and even Porn 2.0. Many of these 2.0s refer to Web 2.0 technologies as the source of the new version in their respective disciplines and areas. For example, in the Talis white paper "Library 2.0: The Challenge of Disruptive Innovation", Paul Miller argues:
Blogs, wikis and RSS are often held up as exemplary manifestations of Web 2.0. A reader of a blog or a wiki is provided with tools to add a comment or even, in the case of the wiki, to edit the content. This is what we call the Read/Write web. Talis believes that Library 2.0 means harnessing this type of participation so that libraries can benefit from increasingly rich collaborative cataloging efforts, such as including contributions from partner libraries as well as adding rich enhancements, such as book jackets or movie files, to records from publishers and others.
Here, Miller links Web 2.0 technologies and the culture of participation that they engender to the field of library science, supporting his claim that there is now a "Library 2.0". Many of the other proponents of new 2.0s mentioned here use similar methods.
The meaning of Web 2.0 is role-dependent, as Dennis D. McDonald has noted. For example, some use Web 2.0 to establish and maintain relationships through social networks, while some marketing managers might use this promising technology to "end-run traditionally unresponsive I.T. department[s]."
There is a debate over the use of Web 2.0 technologies in mainstream education. Issues under consideration include the understanding of students' different learning modes; the conflicts between ideas entrenched in informal on-line communities and educational establishments' views on the production and authentication of 'formal' knowledge; and questions about privacy, plagiarism, shared authorship and the ownership of knowledge and information produced and/or published on line.
Marketing
For marketers, Web 2.0 offers an opportunity to engage consumers. A growing number of marketers are using Web 2.0 tools to collaborate with consumers on product development, service enhancement and promotion. Companies can use Web 2.0 tools to improve collaboration with both their business partners and consumers. Among other things, company employees have created wikis (Web sites that allow users to add, delete, and edit content) to list answers to frequently asked questions about each product, and consumers have added significant contributions. Another marketing Web 2.0 lure is to make sure consumers can use the online community to network among themselves on topics of their own choosing.
Mainstream media usage of Web 2.0 is increasing. Saturating media hubs, like The New York Times, PC Magazine and Business Week, with links to popular new web sites and services is critical to achieving the threshold for mass adoption of those services.
Web 2.0 offers financial institutions abundant opportunities to engage with customers. Networks such as Twitter, Yelp and Facebook are now becoming common elements of multichannel and customer loyalty strategies, and banks are beginning to use these sites proactively to spread their messages. In a recent article for Bank Technology News, Shane Kite describes how Citigroup's Global Transaction Services unit monitors social media outlets to address customer issues and improve products. Furthermore, the institution uses Twitter to release "breaking news" and upcoming events, and YouTube to disseminate videos that feature executives speaking about market news.
Small businesses have become more competitive by using Web 2.0 marketing strategies to compete with larger companies. As new businesses grow and develop, new technology is used to decrease the gap between businesses and customers. Social networks have become more intuitive and user friendly to provide information that is easily reached by the end user. For example, companies use Twitter to offer customers coupons and discounts for products and services.
According to Google Timeline, the term Web 2.0 was discussed and indexed most frequently in 2005, 2007 and 2008. Its average use has been declining by 2–4% per quarter since April 2008.


For more applications, you can click on this link!!


http://web2012.discoveryeducation.com/web20tools.cfm






THANK AGAIN MR.WIKI!!! XD