Project Management: The Contractor’s Design Specification

Posted by on October 3, 2014

Project Management: The Contractor’s Design Specification

by John Reynolds

If, after serious consideration of the customer’s enquiry specification, a contractor decides to prepare a tender, the contractor must obviously develop technical and commercial proposals for carrying out the work. These proposals will also provide a basis for the contractor’s own provisional design specification. It is usually necessary to translate the requirements defined by the customer’s specification into a form compatible with the contractor’s own normal practice, quality standards, technical methods and capabilities. The design specification will provide this link.

It is well known that there are often several different design approaches (proposed solutions) by which the desired end results of a project can be achieved. There are therefore likely to be considerable technical differences between the proposals submitted by different companies competing for the same order. Once the contract has been awarded, however, the number of possible design solutions should have been reduced to one: namely, that put forward by the chosen contractor. But within that general solution or concept there might still exist a considerable range of different possibilities for the detailed design and make-up of the project.

Taking just a tiny element of a technical project as an example, suppose that an automated plant is being designed in which there is a requirement to position a lever from time to time by automatic remote control. Any one or combination of a number of drive mechanisms might be chosen to move the lever. Possibilities include hydraulic, mechanical, pneumatic, or electromagnetic devices. Each of these possibilities could be subdivided into a further series of techniques. If, for example, an electromagnetic system were chosen, this might be a solenoid, a stepping motor or a servomotor. There are still further possible variations within each of these methods. The device chosen might have to be flameproof or magnetically shielded, or special in some other respect.

Every time the lever has been moved to a new position, several methods can be imagined for measuring and checking the result, including electro-optical, electrical, electronic, or mechanical. Very probably the data obtained from this positional measurement would be used in some sort of control or feedback system to correct errors. There would, in fact, exist a very large number of permutations of all the possible ways of providing drive, measurement, and positional control. The arrangement eventually chosen might depend not so much on the optimum solution (if such exists) as on the contractor’s usual design practice or simply on the personal preference of the engineer responsible.
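
To get a feel for how quickly these choices multiply, here is a toy sketch (in Python, with invented option lists standing in for the real drive, measurement, and control alternatives) that simply enumerates the permutations; it is illustrative only and not drawn from any actual design specification.

```python
from itertools import product

# Hypothetical option lists for the lever example; a real project would take
# these from the contractor's design specification.
drives = ["hydraulic", "mechanical", "pneumatic", "solenoid", "stepping motor", "servomotor"]
measurements = ["electro-optical", "electrical", "electronic", "mechanical"]
controls = ["open-loop", "proportional feedback", "PID feedback"]

combinations = list(product(drives, measurements, controls))
print(f"{len(combinations)} drive/measurement/control permutations")
print(combinations[0])  # e.g. ('hydraulic', 'electro-optical', 'open-loop')
```

Even with these short, made-up lists the count reaches 72, and every further variant (flameproof, magnetically shielded, and so on) multiplies it again.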

Without detailed design specifications, there would be a danger that a project could be priced and sold against one set of design solutions but actually executed using a different, more costly approach. This danger is very real, and it grows when the period between submitting a quotation and actually receiving the order exceeds a few weeks, allowing time for the original intentions to be forgotten. It is therefore important to develop quality design specifications when the project is first proposed and to stay close to those specifications throughout the life cycle of the project.

John Reynolds has been a practicing project manager for nearly 20 years and is the editor of an informational website rating project management software products. For more information on project management and project management software, visit Project Management Software Web.

Article Source: ArticleRich.com

Topics: Management System

Yahoo! News

Posted by on September 25, 2014

Yahoo News originated as a pure Internet-based news aggregator by Yahoo. It categorized news into “Top Stories”, “U.S. National”, “World”, “Business”, “Entertainment”, “Science”, “Health”, “Weather”, “Most Popular”, “News Photos”, “Op/Ed”, and “Local News,” a format it still largely uses today.

Articles in Yahoo News originally came from news services, such as Associated Press, Reuters, Agence France-Presse (AFP), Fox News, ABC News, NPR, USA Today, CNN.com, CBC News, Seven News, and BBC News.

In 2001, Yahoo News launched the first “most-emailed” page on the web.[1] The idea was created and implemented by Yahoo software engineer Tony Tam.[2]

Yahoo allowed comments for news articles until December 19, 2006, when commentary was disabled. Comments were re-enabled on March 2, 2010.[3] Comments were temporarily disabled between December 10, 2011, and December 15, 2011, due to glitches.[citation needed]

In June 2011, Yahoo News was rebuilt using an internal content management system called the Yahoo Publishing Platform.[4] The same platform now powers Yahoo News in the following regions and languages: Argentina,[5] Brazil,[6] Canada,[7] English,[8] Chile,[9] Colombia,[10] Mexico,[11] Peru,[12] Spanish (US),[13] English (US),[14] Venezuela,[15] Hong Kong,[16] English (India),[17] Marathi,[18] Tamil,[19] Indonesia,[20] Malaysia,[21] Philippines,[22] Singapore,[23] Taiwan,[24] France,[25] Germany,[26] Italy,[27] Spain,[28] and the United Kingdom.[29]

Since 2011, Yahoo has expanded its focus to include original content, as part of its plans to become a major media organization.[30] Veteran journalists, including Walter Shapiro and Virginia Heffernan, were hired, while the website had a correspondent in the White House press corps for the first time in February 2012.[30][31] Alexa lists Yahoo News as one of the world’s top news sites.[32]

On August 29, 2012, Yahoo News fired Washington bureau chief David Chalian after he made a disparaging comment about Republican Presidential nominee Mitt Romney and his wife Ann Romney during the 2012 Republican National Convention in Tampa, Florida. With Hurricane Isaac entering Louisiana, Chalian suggested that “They’re not concerned at all. They’re happy to have a party with black people drowning”.[33]

According to an interview with Yahoo’s CEO Marissa Mayer, Yahoo News would begin displaying Twitter updates alongside news on both desktop and mobile in the United States in May 2013.[34]

In November 2013, Mayer announced Yahoo had hired former CBS Evening News anchor Katie Couric as Global Anchor of Yahoo News.[35]

Yahoo! developed an application for iOS and Android that collects the most-read news stories from different categories. The app was one of the winners of the 2014 Apple Design Awards.[36]

In April 2009, Yahoo News ranked second among global news sites in terms of users from the United States, after msnbc.com and ahead of CNN, according to Nielsen Ratings.[37]

Topics: Uncategorized

Head of US Minerals Management Service resigns following BP oil spill response

Posted by on September 22, 2014

Thursday, May 27, 2010 

After reports that US President Barack Obama had fired the director of the Minerals Management Service, Elizabeth “Liz” Birnbaum, the Interior Department revealed that she had instead resigned “on her own volition”. The resignation occurred amidst growing criticism of the federal government’s response to the Deepwater Horizon drilling rig explosion and of the agency’s oversight of offshore drilling.

Birnbaum, who became director in June 2009, was expected to testify before a subcommittee of the House of Representatives today with Interior Secretary Ken Salazar. She was not present when Salazar began to speak, however. Salazar said in a statement: “She is a good public servant. She resigned today on her own terms and on her own volition. I thank her for her service and wish her the very best.”

News agencies had reported that Birnbaum was forced out of office, and Obama was expected to officially announce the supposed firing later today in a news conference, along with discussing an Interior Department report on the explosion. Birnbaum observed in her letter of resignation that Salazar “will be requiring three new leaders for the Office of Natural Resources Revenue, the Bureau of Energy Management and the Bureau of Safety and Environmental Enforcement” as the entire Minerals Management Service is reorganized.

Topics: Uncategorized

CIA contractor released from Pakistan

Posted by on September 21, 2014

Thursday, March 17, 2011 

Raymond Davis, a contractor for the CIA, has been released from Pakistan after a ruling by a Pakistani court. He had been detained after killing two citizens who were carrying weapons on January 27, 2011. Davis was freed after agreements to pay “blood money” as compensation for the two lives were reached.

Blood money, defined as compensation paid by a murderer, is required under Islamic law. CIA official George Little said, “When issues arise, it’s our standing practice to work through them. That’s the sign of a healthy partnership—one that’s vital to both countries.” John Kerry, a Massachusetts senator, said, “This was a very important and necessary step for both of our countries to be able to maintain our relationship.”

After the ruling, a group of protestors demonstrated against the release in Lahore, Pakistan, and the United States government flew Davis out of the country. Asad Manzoor Butt, the attorney for the victims, said that the money was paid after hours of discussion with Americans.

According to a US official, the Justice Department will investigate the incident. Anonymous sources said that Pakistan’s government footed the bill, although the United States may be required to pay it back. Davis reportedly shot the two armed men as they attempted to steal from him in Lahore. He was accused of two counts of murder and of carrying unlawful weapons; the trial took place early Wednesday. The presence of international operatives in Pakistan has angered many citizens, resulting in protests around the country in recent weeks.

United States Secretary of State Hillary Clinton said in Cairo, Egypt that the U.S. did not pay the compensation. After being asked by reporters who or what paid the families of the victims, she responded, “The families of the victims pardoned Mr. Davis and we’re grateful for their decision.”

Topics: Uncategorized

Instant Traffic To A Website And Free Advertising : Traffup.Net

Posted by on September 20, 2014

Instant Traffic To a Website and Free Advertising : Traffup.net

by traffup.net

As someone who is always finding new ways to increase website traffic effectively (by effectively I mean useful traffic), I keep coming across new programs and initiatives in the market that could increase my website’s traffic quickly and effectively.

I recently used the current heartthrob, Twiends. It is a very popular application on the web and is considered a very effective way of increasing your Twitter followers. I loved the simplicity of the website and the ease with which Twiends increases your followers. They also have sections for websites and Facebook, where you can increase your website traffic and boost your Facebook page likes respectively.

I was blown away by Twiends’ Twitter and Facebook sections, but the website area seems to be there just for the sake of it. Your website may get a lot of hits, but these hits are not effective, because opening a website for just a few seconds counts as a visit. People don’t really open the website, they don’t navigate it, and they don’t care about the category of the website they are about to view. All they care about is the credits, and the Twiends team is responsible for this. People buy credits and list their websites so that others will view them and appreciate the work by navigating the website further, but Twiends doesn’t make that possible.

This weakness is not present in Traffup, a website completely dedicated to boosting your website traffic. Its design is more sophisticated and solid than Twiends’, and it gives you a very simple user interface with speedy registration. There is no confusion, because the website concentrates only on increasing your website traffic.

When you sign up to Traffup, you get 100 free points which you can use to advertise and showcase your website. The person viewing your website gets some points for viewing it and can then use those points to promote his or her own website. You have to wait for the website to load completely in order to get the points, which is a welcome sign: the main purpose of bringing traffic is to make sure that as many people as possible view the contents of your website, and if people don’t view the website completely, the analytics figures are just numbers that cannot be put to any use. This is where Traffup scores: it makes you actually view the website. You can also browse websites according to your taste and preferences, as they are placed in different categories.

One more thing about Traffup.net that I want to point out is the quality of the websites added to it. Professional developers register on Traffup in order to showcase their websites. Getting cheap traffic is not a long-term approach, and I am happy to see that Traffup.net is not going for it.

So what you get at Traffup.net is high-quality traffic, and plenty of it. The site concentrates only on the website section, which is both good and bad, but I am sure that in time they will expand their offering.

For more information about Traffup or to increase website traffic, just visit http://traffup.net and http://hubpages.com/hub/Does-Traffup-generates-instant-Traffic-on-your-website

Article Source: ArticleRich.com

Topics: Packaging

Ontario Votes 2007: Interview with Progressive Conservative candidate Dan McCreary, Brant

Posted by on September 19, 2014

Tuesday, October 2, 2007 

Dan McCreary is running for the Progressive Conservative Party in the Ontario provincial election, in the Brant riding. Wikinews’ Nick Moreau interviewed him regarding his values, his experience, and his campaign.

Stay tuned for further interviews; every candidate from every party is eligible and will be contacted. Expect interviews with Liberals, Progressive Conservatives, New Democratic Party members, and Ontario Greens, as well as members of the Family Coalition, Freedom, Communist, Libertarian, and Confederation of Regions parties, and independents.

Why have you chosen to involve yourself in the political process? Why did you choose to run in this constituency?

What prior political experience do you have? What skills and insight can you bring to office, from other non-political positions you may have held?

Which of your competitors do you expect to pose the biggest challenge to your candidacy? Why?

What makes you the most desirable of all candidates running in the riding?

What do you feel are the three most important issues to voters in your riding? Are these the same top three issues that are most important to you? What would you do to address these issues?

What should be the first order of business in the 39th Legislative Assembly?

Are the property taxes in your riding at a fair level for the amount of services received in the municipality?

How can the province lead the way in stimulating job creation?

What are your views on the mixed member proportional representation (MMP) referendum?

What role, if any, does “new media” play in your campaign, and the campaign of your party? (websites, blogs, Facebook, YouTube videos, etc) Do you view it as beneficial, or a challenge?

Of the decisions made by Ontario’s 38th Legislative Assembly, which was the most beneficial to this electoral district? To the province as a whole? Which was least beneficial, or even harmful, to this riding? To the province as a whole?

Topics: Uncategorized

DoubleClick

Posted by on September 18, 2014

DoubleClick is a subsidiary of Google which develops and provides Internet ad serving services. Its clients include agencies, marketers (Universal McCann, AKQA etc.) and publishers who serve customers like Microsoft, General Motors, Coca-Cola, Motorola, L’Oréal, Palm, Inc., Apple Inc., Visa USA, Nike, Carlsberg among others. DoubleClick’s headquarters is in New York City, United States.

DoubleClick was founded in 1996 by Kevin O’Connor and Dwight Merriman. It was formerly listed as “DCLK” on the NASDAQ, and was purchased by private equity firms Hellman & Friedman and JMI Equity in July 2005. In March 2008, Google acquired DoubleClick for US$3.1 billion. Unlike many other dot-com companies, it survived the bursting of the dot-com bubble. Today, it focuses on uploading ads and reporting their performance.

DoubleClick was founded as one of the earliest known Application Service Providers (ASPs) for internet “ad-serving”, primarily banner ads. After an IPO on the NASDAQ under the “DCLK” ticker symbol in early 1998, the company appeared in internet traffic reports alongside Yahoo!, AOL, AltaVista and Excite, listed among the top 10 internet websites in the world, since at the time it was delivering as many ad impressions as these early major internet properties were delivering page views. Its DoubleClick DART (Dynamic Advertising Reporting & Targeting) ASP/SaaS ad-serving technology allowed clear targeting and reporting of ad serving per media property for websites within its network and technology sectors.

In 1999, at a cost of US $1.7 billion, DoubleClick merged with the data-collection agency Abacus Direct, which works with offline catalog companies. This raised fears that the combined company would link anonymous Web-surfing profiles with personally identifiable information (name, address, telephone number, e-mail address, etc.) collected by Abacus. This merger made waves and was heavily criticized by privacy organizations. It was discovered that sensitive financial information users entered on a popular Web site that offered financial software was being sent to DoubleClick, which delivered the advertisements. Much of this controversy was generated by statements made by Jason Catlett of Junkbusters, who claimed that DoubleClick was doing and/or intended to do things that it had never mentioned or included in any planned or announced service. The Federal Trade Commission launched an investigation into DoubleClick’s collection and compilation of personal information shortly after the Abacus acquisition, in reaction to which DoubleClick announced that it would not merge the DoubleClick and Abacus databases. The FTC concluded its investigation in early 2001.[1]

In April 2005, Hellman & Friedman, a San Francisco-based private equity firm, announced its intent to acquire the company and operate it as two separate divisions with two separate CEOs for TechSolutions and Data Marketing. The deal was closed in July 2005. Hellman & Friedman announced in December 2006 the sale of Abacus to Epsilon Interactive, whose parent company is AllianceData Systems Corporation.

Google announced on April 13, 2007 that it had come to a definitive agreement to acquire DoubleClick for US $3.1 billion in cash.[2]

US lawmakers have investigated possible privacy and antitrust implications of the proposed acquisition.[3] At hearings, representatives from Microsoft warned of a potential monopolistic effect.[4] On December 20, 2007, the FTC approved Google’s purchase of DoubleClick from its owners Hellman & Friedman and JMI Equity, saying, “After carefully reviewing the evidence, we have concluded that Google’s proposed acquisition of DoubleClick is unlikely to substantially lessen competition.”[5] European Union regulators followed suit on March 11, 2008. Google completed the acquisition later that day.

On April 2, 2008, Google announced it would cut 300 jobs at DoubleClick due to organizational redundancies. Selected employees would be matched within the Google organization as per position and experience.[6]

DoubleClick is often linked with the controversy over spyware because browser HTTP cookies are set to track users as they travel from website to website and record which commercial advertisements they view and select while browsing.[7]

DoubleClick has also been criticized for misleading users by offering an opt-out option that is insufficiently effective. According to a San Francisco IT consulting group, although the opt-out option affects cookies, DoubleClick does not allow users to opt out of IP address-based tracking.[8]

In December 2010, DoubleClick and MSN were shown to have been serving malware for some time via drive-by download exploits placed by a group of attackers.[9]

DoubleClick offers technology products and services that are sold primarily to advertising agencies and media companies to allow clients to traffic, target, deliver, and report on their interactive advertising campaigns. The company’s main product line is formally known as DART, which is designed for advertisers and publishers.

DART automates the administration effort in the ad buying cycle for advertisers (DoubleClick for Advertisers, or DFA) and the management of ad inventory for publishers (DoubleClick for Publishers, or DFP). It is intended to increase the purchasing efficiency of advertisers and to minimize unsold inventory for publishers.

DART Enterprise is the rebranded version of NetGravity AdServer, which DoubleClick acquired with its purchase of NetGravity in 1999. Unlike the DFA and DFP products, which are both Software as a Service (SaaS) products, DART Enterprise is a standalone product running on Linux.

In 2004, DoubleClick acquired Performics.[10] Performics offers affiliate marketing, search engine optimization, and search engine marketing solutions. The marketing solutions were integrated into the core DART system and rebranded DART search.

DoubleClick Advertising Exchange (released Q2 2007) attempts to go even further by connecting both media buyers and sellers on an advertising exchange much like a traditional stock exchange.

DoubleClick targets along various criteria. Targeting can be accomplished using IP addresses, business rules set by the client or by reference to information about users stored within cookies on their machines. Some of the types of information collected are:

In addition, the cookie information may be used to target ads based on the number of times the user has been exposed to any given message. This is known as “frequency capping”.
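
As an illustration of the idea rather than DoubleClick’s actual mechanism, a frequency cap can be sketched as a per-user counter carried in a cookie; the cookie format, cap value, and function name below are all invented for the example.

```python
import json

FREQUENCY_CAP = 3  # hypothetical cap: show each ad at most 3 times per user

def choose_ad(cookie_value, candidate_ads):
    """Pick an ad the user has seen fewer than FREQUENCY_CAP times and
    return it along with the updated cookie value."""
    counts = json.loads(cookie_value) if cookie_value else {}
    for ad_id in candidate_ads:
        if counts.get(ad_id, 0) < FREQUENCY_CAP:
            counts[ad_id] = counts.get(ad_id, 0) + 1
            return ad_id, json.dumps(counts)
    return None, json.dumps(counts)  # every candidate ad is already capped

# Simulate five page views for one user.
cookie = ""
for _ in range(5):
    ad, cookie = choose_ad(cookie, ["ad_42", "ad_7"])
    print(ad, cookie)
```

On the first three simulated page views the user is shown ad_42; the counter then forces a switch to ad_7 for the remaining two.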

Topics: Uncategorized

Top-down and bottom-up design

Posted by on September 18, 2014

Top-down and bottom-up are both strategies of information processing and knowledge ordering, used in a variety of fields including software, humanistic and scientific theories (see systemics), and management and organization. In practice, they can be seen as a style of thinking and teaching.

A top-down approach (also known as stepwise design and in some cases used as a synonym of decomposition) is essentially the breaking down of a system to gain insight into its compositional sub-systems. In a top-down approach an overview of the system is formulated, specifying but not detailing any first-level subsystems. Each subsystem is then refined in yet greater detail, sometimes in many additional subsystem levels, until the entire specification is reduced to base elements. A top-down model is often specified with the assistance of “black boxes”, which make it easier to manipulate. However, black boxes may fail to elucidate elementary mechanisms or be detailed enough to realistically validate the model. A top-down approach starts with the big picture and breaks it down from there into smaller segments.[1]

A bottom-up approach is the piecing together of systems to give rise to more complex systems, thus making the original systems sub-systems of the emergent system. Bottom-up processing is a type of information processing based on incoming data from the environment to form a perception. Information enters the eyes in one direction (input), and is then turned into an image by the brain that can be interpreted and recognized as a perception (output). In a bottom-up approach the individual base elements of the system are first specified in great detail. These elements are then linked together to form larger subsystems, which then in turn are linked, sometimes in many levels, until a complete top-level system is formed. This strategy often resembles a “seed” model, whereby the beginnings are small but eventually grow in complexity and completeness. However, “organic strategies” may result in a tangle of elements and subsystems, developed in isolation and subject to local optimization as opposed to meeting a global purpose.

During the design and development of new products, designers and engineers rely on both bottom-up and top-down approaches. The bottom-up approach is used when off-the-shelf or existing components are selected and integrated into the product. An example would be selecting a particular fastener, such as a bolt, and designing the receiving components such that the fastener will fit properly. In a top-down approach, a custom fastener would be designed such that it would fit properly in the receiving components.[2] For a product with more restrictive requirements (such as weight, geometry, safety, or environment), such as a space suit, a more top-down approach is taken and almost everything is custom designed. However, when it is more important to minimize cost and increase component availability, such as with manufacturing equipment, a more bottom-up approach is taken, and as many off-the-shelf components (bolts, gears, bearings, etc.) as possible are selected. In the latter case, the receiving housings are designed around the selected components.

In the software development process, the top-down and bottom-up approaches play a key role.

Top-down approaches emphasize planning and a complete understanding of the system. It is inherent that no coding can begin until a sufficient level of detail has been reached in the design of at least some part of the system. Top-down approaches are implemented by attaching stubs in place of modules that have not yet been written. This, however, delays testing of the ultimate functional units of a system until significant design is complete. Bottom-up emphasizes coding and early testing, which can begin as soon as the first module has been specified. This approach, however, runs the risk that modules may be coded without having a clear idea of how they link to other parts of the system, and that such linking may not be as easy as first thought. Re-usability of code is one of the main benefits of the bottom-up approach.[3]
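
As a minimal sketch of what “attaching stubs” means in practice (the module and function names are invented for illustration), the top-level procedure below is written and exercised first while the modules it depends on are placeholders, which is also why the real functional units cannot be meaningfully tested at this stage.

```python
# Top-down: the high-level control flow exists first; lower-level modules
# are replaced by stubs until their design is complete.

def read_sensor(channel):
    # Stub: the real acquisition module has not been designed yet.
    return 0.0

def log_reading(channel, value):
    # Stub: persistent storage comes later; just print for now.
    print(f"channel {channel}: {value}")

def monitoring_cycle(channels):
    """Top-level procedure: its structure can be reviewed and integrated now,
    even though every module it calls is still a placeholder."""
    for channel in channels:
        log_reading(channel, read_sensor(channel))

monitoring_cycle(["temperature", "pressure"])
```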

Top-down design was promoted in the 1970s by IBM researcher Harlan Mills and by Niklaus Wirth. Mills developed structured programming concepts for practical use and tested them in a 1969 project to automate the New York Times morgue index. The engineering and management success of this project led to the spread of the top-down approach through IBM and the rest of the computer industry. Among other achievements, Niklaus Wirth, the developer of the Pascal programming language, wrote the influential paper Program Development by Stepwise Refinement. Since Niklaus Wirth went on to develop languages such as Modula and Oberon (where one could define a module before knowing about the entire program specification), one can infer that top-down programming was not strictly what he promoted. Top-down methods were favored in software engineering until the late 1980s,[3] and object-oriented programming assisted in demonstrating the idea that both aspects of top-down and bottom-up programming could be utilized.

Modern software design approaches usually combine both top-down and bottom-up approaches. Although an understanding of the complete system is usually considered necessary for good design, leading theoretically to a top-down approach, most software projects attempt to make use of existing code to some degree. Pre-existing modules give designs a bottom-up flavor. Some design approaches also use an approach where a partially functional system is designed and coded to completion, and this system is then expanded to fulfill all the requirements for the project.

Top-down is a programming style, the mainstay of traditional procedural languages, in which design begins by specifying complex pieces and then dividing them into successively smaller pieces. The technique for writing a program using top–down methods is to write a main procedure that names all the major functions it will need. Later, the programming team looks at the requirements of each of those functions and the process is repeated. These compartmentalized sub-routines eventually will perform actions so simple they can be easily and concisely coded. When all the various sub-routines have been coded the program is ready for testing. By defining how the application comes together at a high level, lower level work can be self-contained. By defining how the lower level abstractions are expected to integrate into higher level ones, interfaces become clearly defined.

In a bottom-up approach, the individual base elements of the system are first specified in great detail. These elements are then linked together to form larger subsystems, which then in turn are linked, sometimes in many levels, until a complete top-level system is formed. This strategy often resembles a “seed” model, whereby the beginnings are small, but eventually grow in complexity and completeness. Object-oriented programming (OOP) is a paradigm that uses “objects” to design applications and computer programs. In mechanical engineering, with software programs such as Pro/ENGINEER, SolidWorks, and Autodesk Inventor, users can design products as individual pieces and later add those pieces together to form assemblies, like building with LEGO. Engineers call this piece-part design.
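
For contrast, here is a small bottom-up sketch in the piece-part spirit described above (the classes are invented purely for the example): the individual parts exist, and can be checked on their own, before the assembly that uses them is ever written.

```python
# Bottom-up: small, self-contained parts are built first and only then
# combined into a larger assembly, like snapping bricks together.

class Bolt:
    def __init__(self, diameter_mm):
        self.diameter_mm = diameter_mm

class Bracket:
    def __init__(self, hole_mm):
        self.hole_mm = hole_mm

    def accepts(self, bolt):
        # A part-level check that can be tested in isolation.
        return bolt.diameter_mm <= self.hole_mm

class Assembly:
    """The top-level product emerges last, from parts that already exist."""
    def __init__(self, bracket, bolt):
        if not bracket.accepts(bolt):
            raise ValueError("bolt does not fit bracket")
        self.parts = [bracket, bolt]

asm = Assembly(Bracket(hole_mm=8), Bolt(diameter_mm=6))
print(len(asm.parts), "parts assembled")
```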

This bottom-up approach has one weakness: good intuition is necessary to decide the functionality that each module should provide. If a system is to be built from an existing system, this approach is more suitable, as it starts from existing modules.

Parsing is the process of analyzing an input sequence (such as that read from a file or a keyboard) in order to determine its grammatical structure. This method is used in the analysis of both natural languages and computer languages, as in a compiler.

Bottom-up parsing is a strategy for analyzing unknown data relationships that attempts to identify the most fundamental units first, and then to infer higher-order structures from them. Top-down parsers, on the other hand, hypothesize general parse tree structures and then consider whether the known fundamental structures are compatible with the hypothesis. See Top-down parsing and Bottom-up parsing.
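
To make the contrast concrete, here is a minimal top-down (recursive-descent) parser for a toy grammar of digits wrapped in parentheses; the grammar and function are invented for illustration and are not tied to any particular compiler.

```python
# Toy grammar:  expr := digit | '(' expr ')'
# A top-down parser hypothesizes which rule applies, then checks the tokens.

def parse_expr(tokens, pos=0):
    """Return (parse_tree, next_position), or raise ValueError on bad input."""
    if pos >= len(tokens):
        raise ValueError("unexpected end of input")
    tok = tokens[pos]
    if tok.isdigit():
        return ("digit", tok), pos + 1
    if tok == "(":
        inner, pos = parse_expr(tokens, pos + 1)
        if pos >= len(tokens) or tokens[pos] != ")":
            raise ValueError("expected ')'")
        return ("group", inner), pos + 1
    raise ValueError(f"unexpected token {tok!r}")

tree, _ = parse_expr(list("((7))"))
print(tree)  # ('group', ('group', ('digit', '7')))
```

A bottom-up parser for the same grammar would instead recognize the innermost pieces (the digits) first and build outward, the shift-and-reduce style used by many generated parsers.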

Top-down and bottom-up are two approaches for the manufacture of products. These terms were first applied to the field of nanotechnology by the Foresight Institute in 1989 in order to distinguish between molecular manufacturing (to mass-produce large atomically precise objects) and conventional manufacturing (which can mass-produce large objects that are not atomically precise). Bottom-up approaches seek to have smaller (usually molecular) components built up into more complex assemblies, while top-down approaches seek to create nanoscale devices by using larger, externally controlled ones to direct their assembly.

The top-down approach often uses the traditional workshop or microfabrication methods where externally controlled tools are used to cut, mill, and shape materials into the desired shape and order. Micropatterning techniques, such as photolithography and inkjet printing belong to this category.

Bottom-up approaches, in contrast, use the chemical properties of single molecules to cause single-molecule components to (a) self-organize or self-assemble into some useful conformation, or (b) rely on positional assembly. These approaches utilize the concepts of molecular self-assembly and/or molecular recognition. See also Supramolecular chemistry. Such bottom-up approaches should, broadly speaking, be able to produce devices in parallel and much cheaper than top-down methods, but could potentially be overwhelmed as the size and complexity of the desired assembly increases.

These terms are also employed in neuroscience, cognitive neuroscience and cognitive psychology to discuss the flow of information in processing.[4] Typically sensory input is considered “down”, and higher cognitive processes, which have more information from other sources, are considered “up”. A bottom-up process is characterized by an absence of higher level direction in sensory processing, whereas a top-down process is characterized by a high level of direction of sensory processing by more cognition, such as goals or targets (Beiderman, 19).[3]

According to psychology notes written by Dr. Charles Ramskov, a psychology professor at De Anza College, Rock, Neisser, and Gregory claim that the top-down approach involves perception that is an active and constructive process.[5] Additionally, it is an approach not directly given by stimulus input, but is the result of the interaction of stimulus, internal hypotheses, and expectation. According to Theoretical Synthesis, “when a stimulus is presented short and clarity is uncertain that gives a vague stimulus, perception becomes a top-down approach.”[6]

Conversely, psychology defines bottom-up processing as an approach wherein there is a progression from the individual elements to the whole. According to Ramskov, one proponent of the bottom-up approach, Gibson, claims that visual perception is a process that relies on the information available in the proximal stimulus produced by the distal stimulus.[7] Theoretical Synthesis also claims that bottom-up processing occurs “when a stimulus is presented long and clearly enough.”[6]

Cognitively speaking, certain cognitive processes, such as fast reactions or quick visual identification, are considered bottom-up processes because they rely primarily on sensory information, whereas processes such as motor control and directed attention are considered top-down because they are goal directed. Neurologically speaking, some areas of the brain, such as area V1 mostly have bottom-up connections.[6] Other areas, such as the fusiform gyrus have inputs from higher brain areas and are considered to have top-down influence.[8]

The study of visual attention provides an example. If your attention is drawn to a flower in a field, it may be because the color or shape of the flower are visually salient. The information that caused you to attend to the flower came to you in a bottom-up fashion—your attention was not contingent upon knowledge of the flower; the outside stimulus was sufficient on its own. Contrast this situation with one in which you are looking for a flower. You have a representation of what you are looking for. When you see the object you are looking for, it is salient. This is an example of the use of top-down information.

In cognitive terms, two thinking approaches are distinguished. “Top-down” (or “big chunk”) is stereotypically the visionary, or the person who sees the larger picture and overview. Such people focus on the big picture and from that derive the details to support it. “Bottom-up” (or “small chunk”) cognition is akin to focusing on the detail primarily, rather than the landscape. The expression “seeing the wood for the trees” references the two styles of cognition.[9]

In management and organizational arenas, the terms “top-down” and “bottom-up” are used to indicate how decisions are made.

A “top-down” approach is one where an executive, decision maker, or other person or body makes a decision. This approach is disseminated under their authority to lower levels in the hierarchy, who are, to a greater or lesser extent, bound by them. For example, a structure in which decisions either are approved by a manager, or approved by his or her authorized representatives based on the manager’s prior guidelines, is top-down management.

A “bottom-up” approach is one that works from the grassroots—from a large number of people working together, causing a decision to arise from their joint involvement. A decision by a number of activists, students, or victims of some incident to take action is a “bottom-up” decision. Positive aspects of top-down approaches include their efficiency and superb overview of higher levels. Also, external effects can be internalized. On the negative side, if reforms are perceived to be imposed ‘from above’, it can be difficult for lower levels to accept them (e.g. Bresser Pereira, Maravall, and Przeworski 1993). Evidence suggests this to be true regardless of the content of reforms (e.g. Dubois 2002). A bottom-up approach allows for more experimentation and a better feeling for what is needed at the bottom.

Both approaches can be found in the organization of states, where they involve political decisions.

In bottom-up organized organizations, e.g. ministries and their subordinate entities, decisions are prepared by experts in their fields, who define, out of their expertise, the policy they deem necessary. If they cannot agree, even on a compromise, they escalate the problem to the next higher level of the hierarchy, where a decision would be sought. Finally, the highest common principal might have to take the decision. Information flows upward: the inferior owes information to the superior. In effect, as soon as the inferiors agree, the head of the organization only provides his or her “face” for the decision which the inferiors have agreed upon.

Among several countries, the German political system provides one of the purest forms of a bottom-up approach. The German Federal Act on the Public Service provides that any inferior has to consult and support his or her superiors, that he or she has to follow only the “general guidelines” of the superiors, that he or she is fully responsible for any of his or her own acts in office, and that he or she must follow a specific, formal complaint procedure if in doubt of the legality of an order.[10] Frequently, German politicians have had to leave office on the allegation that they took wrong decisions because of their resistance to inferior experts’ opinions (commonly described in German as being “beratungsresistent”, or resistant to consultation). The historical foundation of this approach lies in the fact that, in the 19th century, many politicians were noblemen without appropriate education, who increasingly became forced to rely on the consultation of educated experts, who (in particular after the Prussian reforms of Stein and Hardenberg) enjoyed the status of financially and personally independent, practically undismissable, and neutral experts as Beamte (public servants under public law).[11]

The experience of two dictatorships in the country and, after the end of such regimes, emerging calls for the legal responsibility of the “aidees of the aidees” (Helfershelfer) of such regimes also furnished calls for the principle of personal responsibility of any expert for any decision made, this leading to a strengthening of the bottom-up approach, which requires maximum responsibility of the superiors. A similar approach can be found in British police laws, where entitlements of police constables are vested in the constable in person and not in the police as an administrative agency, this leading to the single constable being fully responsible for his or her own acts in office, in particular their legality.

By contrast, the French administration is based on a top-down approach, in which regular public servants have no task other than to execute decisions made by their superiors. As those superiors also require consultation, it is provided by the members of a cabinet, which is distinct from the regular ministry staff in terms of personnel and organization. Those who are not members of the cabinet are not entitled to make suggestions or to take decisions of political dimension.

The advantage of the bottom-up approach is the level of expertise provided, combined with the motivating experience for every member of the administration of being responsible, and finally the independent “engine” of progress that such personal responsibility creates. A disadvantage is the lack of democratic control and transparency, which, from a democratic viewpoint, defers actual policy-making power to faceless, or even unknown, public servants. The fact that certain politicians might “provide their face” for the actual decisions of their inferiors does little to mitigate this effect; what can mitigate it is strong parliamentary rights of control and influence over legislative procedures (as exist, for example, in Germany).

The advantage of the top-down principle is that political and administrative responsibilities are clearly distinguished from each other, and that responsibility for political failures can be clearly identified with the relevant office holder. Disadvantages are that the system demotivates inferiors, who know that their ideas for innovative approaches might not be welcome simply because of their position, and that the decision-makers cannot make use of the full range of expertise which their inferiors will have collected.

Administrations in dictatorships traditionally work according to a strict top-down approach. As civil servants below the level of the political leadership are discouraged from making suggestions, such systems tend to suffer from the lack of expertise that could be provided by subordinates, which regularly leads to a breakdown of the system after a few decades. Modern communist states, of which the People’s Republic of China is an example, therefore prefer to define a framework of permissible, or even encouraged, criticism and self-determination by inferiors, one that does not affect the major state doctrine but allows professional and expertise-driven knowledge to reach the decision-making persons in office.

Both top-down and bottom-up approaches exist in public health. There are many examples of top-down programs, often run by governments or large inter-governmental organizations (IGOs); many of these are disease-specific or issue-specific, such as HIV control or Smallpox Eradication. Examples of bottom-up programs include many small NGOs set up to improve local access to healthcare. However, a lot of programs seek to combine both approaches; for instance, guinea worm eradication, a single-disease international program currently run by the Carter Center has involved the training of many local volunteers, boosting bottom-up capacity, as have international programs for hygiene, sanitation, and access to primary health-care.

Often, the École des Beaux-Arts school of design is said to have primarily promoted top-down design because it taught that an architectural design should begin with a parti, a basic plan drawing of the overall project.

By contrast, the Bauhaus focused on bottom-up design. This method manifested itself in the study of translating small-scale organizational systems to a larger, more architectural scale (as with the woodpanel carving and furniture design).

In ecology, top-down control refers to when a top predator controls the structure or population dynamics of the ecosystem. The classic example is of kelp forest ecosystems. In such ecosystems, sea otters are a keystone predator. They prey on urchins which in turn eat kelp. When otters are removed, urchin populations grow and reduce the kelp forest creating urchin barrens. In other words, such ecosystems are not controlled by productivity of the kelp but rather a top predator.

Bottom-up control in ecosystems refers to ecosystems in which the nutrient supply, productivity, and type of primary producers (plants and phytoplankton) control the ecosystem structure. An example would be how plankton populations are controlled by the availability of nutrients. Plankton populations tend to be higher and more complex in areas where upwelling brings nutrients to the surface.

There are many different examples of these concepts. It is common for populations to be influenced by both types of control.

Topics: Uncategorized

Inflatable Boats, The Different Types

Posted by on September 18, 2014

Inflatable Boats, The Different Types

by Donald Hammas

Here is some information on the different types of inflatable boats on the market today. I hope this information allows you to select the inflatable boat best suited to you. The original inflatable boat, still sold today, consists of a soft floor and an inflatable collar. These are called dinghies and are used by sailboat owners and small powerboat owners. They don’t have keels and usually allow for only small outboards of up to 2.5 hp.

The next type of inflatable boat is the roll-up. These are available in both high-pressure air-deck models and wood-floor deck models. These boats have keels and are considered sport boats; with enough horsepower they can come up on plane and reach speeds of up to 15 mph. They are best suited to sail and powerboat owners who want to store the inflatable rolled up when not in use. If you want something quicker to assemble and disassemble, the high-pressure air deck is the right choice of the two types discussed. If you are not planning to disassemble the inflatable very often, you might consider the wood-deck model. It is a little heavier, but the wood deck can be better for activities such as fishing.

The next type is the inflatable kayak or KaBoat, which takes up only as much space as a medium-sized bag when deflated and can be stored almost anywhere, such as a car trunk, a closet, or even an empty corner of your apartment. When inflated with a hand or electric pump, in a matter of minutes an inflatable kayak or KaBoat can take you or a friend out for a day of fun and exploration. Inflatable kayaks provide smooth and easy paddling across that lake, river or bay you always wanted to explore. And best of all, when you’re done, just let the air out and roll everything back into the bag until next time.

The newest type is the inflatable KaBoat. A crossover between an inflatable kayak and an inflatable boat, the KaBoat represents the best of both worlds. Modeled after narrow Asian dragon boats, and thanks to its narrow profile, a KaBoat can go faster than a standard inflatable boat with a lower-rated engine. Extremely portable, it will fit in a medium-sized bag, so you can go on vacation and take a small KaBoat with you, along with an optional small electric or gas engine. The main benefit is that if you get tired of paddling, you can use the engine to get back ashore. KaBoats are also very stable: you can stand up to fly fish, or get into that narrow spot other boats can’t reach for the best fishing. A KaBoat can also be used as a dinghy for a yacht or sailboat.

That is a quick guide to the different kinds of inflatable boats. The last two things I want to share with you are the two major types of material used to make the tube collar. Most of the boats sold in California, Florida and warmer climates are made from a hypalon coating over a neoprene and vinyl material. Hypalon-coated material offers the most UV resistance and carries the longest warranty, 10 years from some manufacturers. The other most popular coating is PVC, a type of vinyl with a PVC coating for UV protection; it is heavy duty and an excellent material for inflatables. It is somewhat less expensive than hypalon coatings, and it can be machine glued or hand glued. The major difference between the two is hypalon’s ability to handle UV rays better than PVC. Hypalon is the best coating for any boat that is going to be stored outside on the deck of a yacht or on a davit system. PVC boats are best suited to the roll-up boats mentioned earlier.

Please feel free to visit our website for more information and pricing on several types and brands of inflatable boats. Please visit Don’s Inflatable Boats and see the difference between the types of inflatable boats we have. See how they can make this summer one to remember!

Article Source: ArticleRich.com

Topics: Fishing Charters

Man drowns in Texas lake after falling from boat

Posted by on September 18, 2014

Wednesday, April 24, 2013 

Rescue personnel have found the body of a 46-year-old man who drowned in Lake Palestine, in northeast Texas, while fishing on Sunday. Rescue crews began searching the lake for a missing person on Sunday evening around 8:30pm local time. A police official told reporters the man was found just before midnight.

The man, identified as Fredrick Perkins, was a resident of the city of Tyler. Perkins was fishing with a woman when they fell out of their boat. Reports indicate Perkins was not wearing a life vest, although the woman was. She said he fell into the water while attempting to bring in a stringer of fish.

Onlookers stated it was unclear if the boat overturned or if the man simply fell from the boat. Witnesses reported the boat was about 300 yards from the shore when the man fell into the water. Officials used a sonar device to locate Perkins’ body. His remains were found along the bottom of the lake at 11:53pm.

Topics: Uncategorized
