I need a UPS – which one is best?

By admin, 13 May 2014

By Elrica Quick, APC Product Specialist at DCC

The stability and availability of electricity is a growing problem in South Africa. An Uninterruptible Power Supply (UPS) has therefore become a vital piece of equipment for businesses of all sizes, from large enterprises to the Small and Home Office (SOHO). While enterprise businesses recognise the need for such equipment, many small businesses do not understand the consequences of not having a UPS in place, and choosing which UPS to purchase can seem a challenging task. Understanding both the need for a UPS and the requirements of the business is key to avoiding the pitfalls of power problems, and makes purchasing a UPS for your business far less painful.

When power fails, supply is abruptly cut off to all electronic devices. In the case of servers and computers, this sudden cut-off can lead to lost and corrupt data, which can cripple a business to the point of closure. In addition, surges and spikes can damage the equipment itself, shortening its lifespan and forcing businesses to replace it.

A UPS helps to mitigate these problems by giving users a short window to close programs and safely shut down computers and servers when a power outage occurs, preventing the corruption and loss of data. UPS solutions that support extended battery packs can also enable businesses to continue working through a power failure, helping to minimise the impact on productivity.

The need for a UPS in businesses of all sizes is clear. However, many users remain unsure as to which UPS to purchase, due to the wide variety of available solutions. The correct UPS will depend on two main factors. By answering the following two simple questions, users will be able to identify the specification, size and topology of the UPS required.

  • For how long do you need the UPS to run your equipment during a power outage?
  • What type of equipment do you want to connect to the UPS?

The first thing to establish is which equipment you need to connect to the UPS. This will help you determine not only what size of UPS to buy, but also what type of UPS is required. The equipment to be connected determines the load the UPS will have to carry, which is the combined electricity consumption of all of the equipment. Once this has been calculated, choose a UPS with an output watt capacity 20-25% higher than the total load. This headroom caters for future growth, allows the UPS to run optimally and ensures the UPS does not need to be replaced every time new equipment is purchased.
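
To make this sizing rule concrete, here is a minimal sketch in TypeScript that sums an assumed equipment list and applies 25 percent headroom. The devices and wattages are illustrative assumptions only, not recommendations; check the rated consumption of your own equipment.

  // Illustrative only: sum the load of the connected equipment and add headroom.
  interface Device {
    name: string;
    watts: number; // typical running consumption (assumed figures)
  }

  const equipment: Device[] = [
    { name: "server", watts: 450 },
    { name: "network switch", watts: 60 },
    { name: "desktop PC", watts: 200 },
    { name: "monitor", watts: 30 },
  ];

  const totalLoadWatts = equipment.reduce((sum, d) => sum + d.watts, 0); // 740 W
  const headroomFactor = 1.25; // 20-25% above total load, per the guidance above
  const minimumUpsWatts = Math.ceil(totalLoadWatts * headroomFactor); // 925 W

  console.log(`Total load ${totalLoadWatts} W; choose a UPS rated for at least ${minimumUpsWatts} W output.`);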

It is also important to determine how long the UPS needs to support the connected equipment or load – how long do you need to be able to run equipment off the UPS? Do you require the UPS simply to protect equipment, close programs and shut down computers safely, or do you need surge protection plus extended backup runtime? The longer the runtime required, the more external battery packs will be needed.

UPS solutions also come in a variety of topologies and sizes, so it is important to consider the key features of the UPS. Automatic Voltage Regulation (AVR) and power conditioning help to ensure the supply of power is consistently clean and stable, smoothing out dips and spikes. Automatic self-test functionality periodically checks the condition of the battery and provides notification when the battery is nearing the end of its lifespan. Predictive failure notifications alert users if the UPS is going to fail. These features help to ensure the UPS does not stop working without the user being aware of it.

Certain UPS solutions have status indicators to enhance ease of use, and have user-replaceable batteries, amongst other features. It is important to examine the features offered and map these to the needs of your business. For enhanced functionality, certain UPSs allow for the inclusion of a network management card, which enables users to manage and monitor one or more UPS solutions across the Local Area Network (LAN) or Wide Area Network (WAN). In addition, certain UPS solutions feature automatic shutdown software, which will shut down your equipment safely without the user having to be at the equipment to do so.

In light of the various features, sizes, topologies and functionalities of a UPS, selecting the right one can be a challenging task. There is no ‘one size fits all’ solution, and speaking to an expert on power solutions can be beneficial, particularly for organisations that are new to the concept of a UPS. In addition, specific technical expertise may be required to install a UPS, depending on the size of the unit. Units up to 3 kilovolt-amperes (kVA) can be plugged into a normal 15 A wall socket and do not require installation by a qualified electrician. Units greater than 3 kVA, however, need to be connected directly to a Distribution Board (DB) and require a qualified electrician. Three-phase UPSs used in larger organisations require specialised technicians to commission and start up the UPS. Expert assistance can prove incredibly helpful in many instances.

UPS solutions are vital for any business today. Unfortunately, many businesses only realise the need for such a solution once damage has already been done, by which time it is often too late. UPS solutions are typically a grudge purchase; however, the cost of such a solution is far less than the cost of continually replacing damaged equipment and of lost productivity. In addition, a UPS can help to reduce the headaches that come with insurance claims. UPS systems offer many benefits, and choosing the right one to meet business needs will ensure that full value can be leveraged from a small investment.

Self-service key for successful big data analytics

By admin, 13 May 2014

Big data is not a silver bullet, however, when applied to appropriate use cases, it can bring significant value to banks and insurance companies. This was the core message from the 2014 Big Data in Financial Services conference that was chaired by Gary Allemann, MD of Master Data Management, in Dubai earlier this month.

Most credit card users will, at some stage, have received a call from their bank to verify a transaction. Maybe the transaction amount was much larger than usual, maybe the goods purchased were unusual, or the transaction was made in an unusual location. This is an example of big data analytics for fraud detection.
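
As a simple illustration of the kind of rule such analytics might apply (a sketch under assumed thresholds and profile fields, not any bank's actual model), the snippet below flags a transaction that is far larger than the customer's historical average or made in an unfamiliar country.

  // Illustrative fraud-check rule; the threshold and profile fields are assumptions.
  interface Transaction {
    amount: number;
    country: string;
  }

  interface CustomerProfile {
    averageAmount: number;    // derived from the customer's transaction history
    usualCountries: string[]; // where the card is normally used
  }

  function shouldVerify(tx: Transaction, profile: CustomerProfile): boolean {
    const unusuallyLarge = tx.amount > profile.averageAmount * 5; // assumed threshold
    const unusualLocation = !profile.usualCountries.includes(tx.country);
    return unusuallyLarge || unusualLocation;
  }

  // A large purchase in an unfamiliar country would trigger a verification call.
  console.log(shouldVerify(
    { amount: 25000, country: "FR" },
    { averageAmount: 1200, usualCountries: ["ZA"] }
  )); // true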

Another common use for big data in financial services is to better understand customer behaviour – whether to reduce churn or to cross sell. An integrated view of customer activities across channels can deliver insight as to where that customer is in a buying cycle, or whether similar activities were the precursor to churn in other clients. Big data provides the insight to improve the customer experience and, ultimately, results in increased revenue.

“For each use case discussed, the importance of sound planning, quality data and data governance was stressed. Big data brings with it additional complexity and, if approached in an ad hoc way, is likely to fail. Businesses must also address privacy concerns, particularly when big data analytics is applied in the cloud. One approach, implemented by a global exchange, is to allow users to access cloud-based processing power – for example, to do real-time fraud analytics on trading data – while storing all sensitive data within the secure internal environment. This hybrid approach balances the need for governance with the cloud’s ability to reduce costs,” says Allemann.

Big data is, in many ways, synonymous with the Hadoop platform. One important lesson is that Hadoop does not replace existing Business Intelligence (BI) and data warehousing solutions. However, where traditional Enterprise Data Warehouse (EDW) solutions are highly structured and difficult to adapt to new data sources, Hadoop provides the flexibility to rapidly integrate data from multiple sources – both structured and unstructured. “This is the basis of a use case described by a global bank – to optimise their data warehouse through the use of a Hadoop landing area that reduces both the time and cost of data integration, and also allows them to cost effectively store much more data,” he adds.

“The bank stressed the importance of commercial support for their Hadoop environment: Hadoop’s open source nature carries risk. The framework is evolving and has a number of competing options – some of which may not survive. It is also technically complex, requiring an investment in Java, R, Python and other specialist development environments.”
Self-service big data analytics

The real value of big data analytics can best be realised through the use of a self-service big data platform, such as Datameer. Datameer provides a single, intuitive application for the full analytics life cycle – combining data integration, data management and self-service analytics on top of Hadoop.

“Self-service big data platforms, such as Datameer, can shield your organisation from much of this technical complexity, providing the governance and management framework to exploit big data in a fraction of the time and effort required using the pure Hadoop stack,” concludes Allemann.

Sign up for a free trial of Datameer at http://www.datameer.com/datameer-trial.html

Countering the cost of compliance

By admin, 13 May 2014

By Emily Wojcik, Senior Product Marketing Manager – Information Management, CommVault

Controlling compliance costs continues to be one of the top priorities for businesses today. The ultimate challenge is how to keep the right data in the right place at the right time. Unfortunately, legacy solutions often only serve to turn archives into permanent storage pools and certainly don’t address long-term retention cost-effectively. It’s no surprise then that discussions in the boardroom are focused on how to balance long-term retention with bottom line economics. How much of what the business is paying to store has real business value? Could the information that is being kept pose a potential risk? What do you know about your data and, more importantly, what don’t you know? Can you be confident that the right content can be found on demand? How much budget is being set aside annually in long-term vaulting costs and how is this managed?

The added complication in the era of Big Data is that many companies are erring too heavily on the side of caution. With regulations and requirements changing year on year, they are now at risk of saving too much information and, in effect, becoming data hoarders. The bigger issue here is that costs and risks escalate when businesses blindly save everything. Growing file shares, emails, desktops and laptops mean that there is a never-ending source of new data to manage, but does the old data rapidly become redundant, outdated and stale, or does it remain just as business critical as the day it was generated? It comes down to the level of understanding, intelligence and knowledge about which data is being stored, where and why. This uncertainty is leading many companies to adopt a “keep everything” strategy in order to play it safe but, according to a survey by the Compliance, Governance and Oversight Council, up to 69 percent of retained data has no value to the enterprise. Not only is this content taking up valuable and expensive storage space, but it can also become a liability if not properly managed.

The demand for heightened compliance – whether for regulatory, corporate, legal or even security reasons – clearly means that organisations need to think far more strategically about what they pay to keep, as well as where and how they keep it. Using legacy methods to keep everything is no longer acceptable or practical.

There is, however, a growing realisation that the solution to this headache is content-based retention, because it helps control the cost, risk and complexity of managing and retaining compliant data. In other words, it enables companies to gain a much clearer understanding of which data has governance, evidentiary or other business value, and therefore makes retention policies about what data to keep, why and for how long far easier to define and enforce. It also means that once retention policies are agreed, data can be automatically classified and organised according to business value. This eliminates time-consuming and error-prone manual processes and, by putting data collection and long-term retention on automatic pilot, ensures data is more effectively managed throughout its lifecycle. There are content-based retention solutions available that can index and systematically move only relevant data to the most cost-effective storage and defensibly delete everything that is irrelevant to the business. This increases the likelihood of significant reductions in long-term storage costs, which in turn reduces the cost of compliance.
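
As an illustrative sketch of this idea (not CommVault's product or API; the tags, age limits and tiers are assumptions), each indexed item can be classified by content and age, then routed to an archive tier, flagged for defensible deletion, or left on primary storage.

  // Illustrative content-based retention decision; rules and tiers are assumptions.
  type Tier = "primary" | "archive" | "delete";

  interface Item {
    path: string;
    ageDays: number;
    tags: string[]; // e.g. produced by content indexing and classification
  }

  function retentionDecision(item: Item): Tier {
    if (item.tags.includes("regulatory") || item.tags.includes("legal-hold")) {
      return "archive"; // keep for compliance on cost-effective storage
    }
    if (item.ageDays > 365 && !item.tags.includes("business-critical")) {
      return "delete"; // stale data with no identified business value
    }
    return "primary"; // still in active business use
  }

  console.log(retentionDecision({ path: "/shares/q1-report.xlsx", ageDays: 30, tags: ["business-critical"] })); // "primary"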

However, perhaps the most important point to note is that combining content-based retention with a storage-agnostic, centralised virtual repository as well as enterprise wide search and self-service access also reduces risk to the business because everyone, including legal and compliance teams, can quickly find the right information when the clock is ticking. Intelligent indexing of both backup and archive data simplifies information retrieval, even if it is stored in the cloud.

Companies are beginning to understand that mistakes in handing over emails and documents during court proceedings, and inadequate implementation of legal hold preservation, can lead to hefty fines, sanctions and brand degradation. Yet, according to Gartner1, 62 percent of companies are still not using tools to retain and understand data. Gartner understandably notes that “Challenges exist and may seem insurmountable at times. However, the results of good data management will support an agile business that understands its data and empowers its businesses to use it.”

Proponents of content-based retention naturally promote its ability to:

  • Reduce long-term retention storage costs and capacity needs by up to 70 percent by eliminating data that has no value
  • Streamline information lifecycle management with automated policies to classify, organise, retain and delete information
  • Enable efficient online archives with seamless cloud storage integration for faster business access and insight
  • Reduce the burden on end-users for classification
  • Simplify discovery, legal holds and review for file and email data to reduce risk

However, organisations that would like to counter the cost of compliance would be wise to consider the following three points if they want to deliver greater insight and increased value to the business.

1. Compliance changes and there is no crystal ball. It’s safe to say that new regulations will appear, existing ones will change and a few will go away so if you want to build an agile and responsive business, consider leveraging a scalable and adaptable technology to keep pace with the business as it evolves.
2. Don’t blindly “save everything” without considering what value lies within the data. It can be a problem from both a cost and risk perspective. A content based approach to information management will allow you to manage data intelligently, automatically and cost-effectively throughout its lifecycle.
3. Listen to the advice of industry experts. Gartner suggests that “When possible, seek solutions that leverage a common infrastructure.” A single data repository and a central deletion point will provide a more cost-effective, risk-averse, defensible compliance solution.

1. Does Integrated Backup and Archiving Make Sense (Gartner, 2013)


Riverbed renames products to more clearly describe what they do and convey their value

By admin, 13 May 2014

New Brand Names and Focus on Platform Further Position Riverbed as the Leader in Application Performance Infrastructure, an $11 Billion Market Opportunity

Riverbed Technology, the leader in application performance infrastructure, today introduced new product names that better describe what each product does and reflect how together they deliver unique strategic value as part of an integrated solution, the Riverbed® Application Performance Platform™. The consistent naming framework reinforces how Riverbed solutions work together to deliver optimal application and data performance across global infrastructures.

The product renaming is the next step in Riverbed’s evolution and expansion into a multi-product company that offers an integrated application performance platform, a growth strategy that has been in the works for several years and was unveiled by Riverbed in November 2013. Riverbed’s strategy is aimed at fuelling its growth in application performance infrastructure, an $11 billion market opportunity growing at about a 5% compound annual growth rate (CAGR).

“Riverbed launched our first product 10 years ago with our Steelhead WAN Optimization solution, and over the last four years, we’ve made the journey from a single product company to something much bigger. Today, we offer the industry’s most complete application performance platform to allow customers to host applications and data in whatever location makes the most business sense, regardless of distance or location, while ensuring the flawless delivery of applications and data. We call this concept location-independent computing and it has been the idea behind the solutions we’ve brought to market for years,” said Kate Hutchison, CMO, Riverbed. “We’re now updating our product brand names to better communicate what our products do and reinforce how they are part of an integrated platform that ensures applications perform as expected and data is always available when needed.”

The Riverbed Application Performance Platform, which includes five product families as well as open APIs, developer tools and professional services, is the only complete solution in the market that optimises application and data delivery and provides the visibility to detect and fix performance issues before end users notice a problem. With this Platform, companies can host applications and data in the locations that best serve the business – in data centres, in branch offices, in the cloud or any combination thereof – while ensuring flawless application delivery, better leveraging global resources, radically reducing the cost of running their business and maximizing employee productivity.

The new product family names build off Riverbed’s flagship product, Steelhead. Each begins with a word that signifies the strength of infrastructure and celebrates Riverbed’s heritage, Steel:

  • Riverbed® SteelHead™ (formerly Steelhead) – the #1 WAN optimisation solution that delivers applications and data at the fastest speeds across the optimal networks for the lowest cost.
  • Riverbed® SteelFusion™ (formerly Granite) – the only branch converged infrastructure that delivers local performance, data centralisation, instant recovery and lower TCO.
  • Riverbed® SteelApp™ (formerly Stingray) – the #1 virtual application delivery controller (ADC) for scalable, secure and elastic delivery of enterprise, cloud and e-commerce applications.
  • Riverbed® SteelStore™ (formerly Whitewater) – the industry’s most scalable cloud storage appliance that reduces data protection costs by up to 80%, integrates seamlessly into existing backup infrastructure and eliminates tape backup.
  • Riverbed® SteelCentral™ (formerly OPNET, Cascade and NEOP) – the only performance management suite that combines user experience, application and network performance management to provide the visibility needed to diagnose and cure issues before end users notice a problem, call the help desk to complain or jump to another web site out of frustration.
  • Riverbed® SteelScript™ (formerly FlyScript) – open APIs and developer tools to customise and automate application performance infrastructure from Riverbed.

These products all come together in the Riverbed Application Performance Platform™, as shown in the diagram below.

[Diagram: Riverbed Application Performance Platform]

SEACOM deploys Infinera Intelligent Transport Network in Africa

By admin, 12 May 2014

Infinera (NASDAQ: INFN), provider of Intelligent Transport Networks, and SEACOM announced today the deployment of the Infinera DTN-X platform across SEACOM’s terrestrial network providing services to the African continent. The Infinera Intelligent Transport Network, featuring the DTN-X packet optical transport networking platform, enables SEACOM to differentiate its services and manage costs as it scales network capabilities.

SEACOM owns and operates a high-capacity international network consisting of multiple submarine cable systems connecting Africa to Europe, Asia and the Middle East. SEACOM’s submarine and terrestrial networks, stretching across 17,000 km, enable connectivity solutions that give carriers, network operators, and service providers the ability to expand and grow their operations across Africa and beyond.

Infinera provides SEACOM with an Intelligent Transport Network featuring the industry’s only commercially available single-card 500 gigabit per second (Gb/s) FlexCoherent super-channel solution, which is based on Infinera’s widely deployed photonic integrated circuits. These 500Gb/s super-channels enable SEACOM to scale to terabits of transmission capacity, integrating DWDM optical transmission in a single platform that is capable of supporting up to 12 terabits per second of non-blocking OTN switching as their traffic requirements grow in the future. Infinera’s intelligent software, combined with this converged platform, automates network operations to reduce both operational cost and service delivery times. The Infinera DTN-X is designed to scale without compromise and to enable future upgrades to single-card terabit super-channels.

“The need for bandwidth in Africa is growing substantially and SEACOM’s network is ready to address this increase in demand for our customers,” said Claes Segelberg, CTO at SEACOM. “The Infinera Intelligent Transport Network enables SEACOM to offer 100 Gb/s services and provides us with the ability to offer higher speed services to accommodate existing customers and attract new customers.”

“SEACOM’s deployment of an Infinera Intelligent Transport Network solution substantially increases the capacity available for Internet and global communications services as the company addresses the growing need for bandwidth in Africa,” said Chris Champion, senior vice president, EMEA sales. “We are honored to work with SEACOM to deploy the DTN-X platform. This network upgrade continues our longstanding relationship with SEACOM and reflects positively on the success of SEACOM’s prior deployment of the Infinera DTN.”

1Stream introduces WebRTC browser-based video and voice calling

By admin, 12 May 2014

Hosted call centre technology provider 1Stream says the advent of Web Real-Time Communication (WebRTC) is a disruptive technology that will make the rollout of new call centre services even faster — and provide easy new ways for customers to call in.

“This is one of the most exciting technical innovations we’ve seen in a long time,” says Director Jed Hewson. “WebRTC is a new standard in HTML5 that makes it possible to send and receive voice and video calls directly from a browser — no extra software or plugins needed. It’s cutting edge technology that 1Stream will be first to deploy in South Africa.”
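
As a minimal sketch of what that looks like with the standard browser WebRTC APIs (this is not 1Stream's implementation; the signalling helper and STUN server address are hypothetical placeholders), a page can capture microphone audio and create a call offer in a few lines of TypeScript:

  // Place a browser voice call: capture audio, create an offer, hand it to signalling.
  async function startVoiceCall(sendToCallCentre: (offer: RTCSessionDescriptionInit) => void) {
    // Ask the browser for microphone access; no plugin or softphone is needed.
    const stream = await navigator.mediaDevices.getUserMedia({ audio: true });

    const peer = new RTCPeerConnection({
      iceServers: [{ urls: "stun:stun.example.org" }], // hypothetical STUN server
    });

    // Attach the local audio track to the connection.
    stream.getTracks().forEach((track) => peer.addTrack(track, stream));

    // Play the agent's audio when it arrives.
    peer.ontrack = (event) => {
      const audio = new Audio();
      audio.srcObject = event.streams[0];
      void audio.play();
    };

    // Create the offer and send it over whatever signalling channel the site uses.
    const offer = await peer.createOffer();
    await peer.setLocalDescription(offer);
    sendToCallCentre(offer);
  }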

For call centres, Hewson says WebRTC will do away with the need for desktop phones or softphones. “The phone has always been a troublesome part of setting up a call centre – softphones are finicky, and having hardware sitting on your desktop is expensive as well as difficult to integrate with the software. Now all you need is a browser.”

“We can now set up a new agent in minutes,” says Hewson. “And we can do it on any operating system — they’re no longer restricted to Windows. This is going to reduce costs and make it a lot easier, especially for call centres that use home-based agents.”

Browser-based phones also mean the end of installed upgrades, adds Hewson. “Even softphones have to be updated to new versions periodically, and that often introduces problems. With a phone built right into a website, there’s no need to upgrade anything on the agent’s computer.”

For consumers, Hewson says WebRTC means they’ll be able to call into a contact centre directly from their web browsers on a desktop computer, tablet or phone, just by clicking a single button. “From a customer service point of view it’s ultimate convenience.”

1Stream’s WebRTC service will be available to clients before the end of June, says Hewson. “We’re very excited to be able to introduce this to our clients. It brings a level of flexibility and accessibility to call centre operations that’s never been possible before.”

Data Centre power quality and assurance – key considerations for financial institutions

By admin, 12 May 2014

By Marco da Silva, MD of Jasco Power Solutions

If the data centre of a financial services provider goes down due to a power glitch, the impact is significant – there’s potential damage to equipment, applications and systems. The greater impact, however, includes downtime, business loss and damage to customer relationships. The speed and cost of recovery, and the extent of the damage sustained, will depend on the power management system in place. Clean and constant power is critical, and to get it right a number of issues must be taken into consideration.

A data centre typically houses the ICT infrastructure of a business. It is the nerve centre of the business where multiple cabinets of ICT equipment enable communication, transactions and myriad other business processes, and precious business data is stored and retrieved. A power outage, poor or dirty power supply, or a power anomaly such as a surge or lightning strike, can wreak havoc with sensitive systems. Protection is vital, but so is ensuring suitable recovery and business continuity.

Power quality and assurance – the practical considerations

Much will depend on the risk appetite of the business: how much data can it afford to lose, or is instant failover (100 percent uptime) required? Depending on the challenges faced in the environment and the sensitivity of equipment, a number of technologies can be applied to improve power quality, from voltage stabilisation and galvanic isolation of the power supply to transient absorption and harmonic filtration.

Practical considerations must also be factored in. Besides system downtime, loss of power may mean cooling equipment is switched off, putting ICT equipment at risk of overheating or suffering damage as a result of rising humidity. The result may be loss of system performance or operational integrity, resulting in unacceptable times to recovery. In addition, when power is restored and systems start up, there is generally a power spike to deal with.

While a branded off-the-shelf solution, such as a UPS, can provide part of the answer, a customised solution that meets the requirements of a specific environment will deliver better odds. A tailored solution will address failover in terms of speed and capacity, but also ensure power quality. Such a solution may include the installation of voltage stabilisers, UPSs, inverters, surge and lightning protection, transformers and/or power supplies. It will certainly require careful assessment of loads, identification of mission-critical systems and sensitive equipment, testing and refining of power quality, and ensuring suitable cooling and distribution of power in terms of the feed to the UPS. Heating, ventilation and air conditioning equipment is usually powered separately, given the huge inrush current of this equipment; these loads are typically run off a generator.

Smaller, modular, hot swappable

In terms of advancements in system design, power management solutions are getting smaller, and modular solutions are providing additional benefits. Space is at a premium in data centres, and the traditional requirement for failover specifies replication of a UPS or power management system (N+1). New solutions that allow single, modular, scalable systems to provide failover within the same frame minimise the footprint of the solution.

Modular system designs make almost any configuration possible, while the hot-swappability of components within these systems drives down the mean time to repair and replacement, and ultimately to recovery and failover. Good advice is to ensure the solution provider holds sufficient stock to enable immediate replacement of key components.

Other considerations are energy efficiency, the addition of environmental monitoring to the solution, and the use of embedded web servers, which allow the power management solution to be accessed from a remote location. The latter also facilitates monitoring of equipment by the service provider if desired. Maintenance of these systems is crucial. A rule of thumb is to service UPSs twice a year, testing electrical and battery integrity.

Depth of experience, engineering capability – a winning combination

While there are a number of proven and recognised UPS brands available to select from, the majority are only available in standard configurations. They cannot be customised. Resellers of these solutions are also usually not capable of putting together or advising on the design of complex, multi-component tailored solutions. A solution provider with the right experience and capabilities will be able to identify challenges and potential complexities and develop a suitable solution.

In selecting a provider, identify companies that have strong engineering capabilities, broad experience and a solid track record in designing and building bespoke solutions. For quality assurance, ensure that a manufactured solution is accompanied by ISO 9001:2008 certification.


Your e-toll headache will not abate Mr Mantashe -OUTA

By admin, 12 May 2014

The Opposition to Urban Tolling Alliance (OUTA) notes with interest the dismissive comments made by the ANC’s Mr Gwede Mantashe about the impact of e-tolls on the elections, wherein he states the people of Gauteng ‘must stop whinging and pay up’.

“The eleven percentage point drop in overall electoral support for the ANC in Gauteng translates into a substantive decline in the traditional support base of the ANC since the 2009 elections. As much as the ANC Executive tries to downplay the decline, it is massive and is largely attributed to the decision to force e-tolls on an unwilling and angry public who won’t pay for something they were not adequately consulted on and didn’t ask for,” says OUTA Spokesperson John Clarke. He adds: “While the Nkandla issue has been a national one, e-tolls have been concentrated in the province of Gauteng, which is where the biggest haemorrhage of ANC support took place.”

The ANC provincial chair, Paul Mashatile, recently said that some ‘honest introspection’ was needed to identify why the Gauteng ANC has lost votes. “When the new Gauteng Provincial Legislature convenes, the ANC will have seven fewer members than before, with their 40 MPLs facing a combined total of 33 opposition members from both the left and the right, all of whom are opposed to e-tolls. Mr Mantashe is not serving the interests of the ANC by playing along with Sanral’s ambitions on e-tolls,” Clarke said.

Wayne Duvenage, the Chairperson of OUTA, adds: “The Gauteng-based ANC leadership would be wise to embark on a path of meaningful engagement with stakeholders on the e-toll issue as soon as possible, before it gets any messier. If they want the truth on how the Gauteng public feel about e-tolls, they should consider calling a referendum, or take a serious look at the myriad polls and discussions on the matter. With well over a million freeway users in Gauteng defying the system, combined with the serious administrative problems, extremely high costs and gross inefficiencies, the e-toll decision was always flawed as a result of poor research, weak data and an arrogant attitude employed by Sanral to convince the authorities to proceed. It’s certainly not going to get better.”

The commoditisation of software infrastructure and the impact on business

By admin, 12 May 2014

Software development has become a platform for driving business operations. In a world filled with things such as mobile apps, cloud computing, and virtualisation, companies can no longer afford not to have ICT integration specialists on board. Yet, the commoditisation of software infrastructure brings its own challenges, says Malcolm Rabson, MD of Dariel Solutions.

“The consumerisation of technology has played a big role in this,” says Rabson. “Software is no longer something that is only driven by business strategy. Instead, it is also shaped by the need for more user-friendly and flexible solutions that fulfil a variety of needs. As with many other pieces of enterprise technology, software has become more mainstream. This has made it more accessible to a wider audience who, in turn, are putting companies under pressure to implement components of it inside the organisation.”

Software developer John Walker wrote in his blog in 2007 that the proliferation of coders, and of code distributed throughout the internet, has seen the emergence of low-cost or no-cost programs, libraries and other bits of code. Other developers and IT professionals can latch onto these, either as is or stuffed into a complex composite application. He argues that this should not be viewed through the traditional definition of a commodity, but should instead be driven by a new way of thinking around the reduction in price and the number of different implementation opportunities for solutions.

But even taking into account a more conservative look at commoditisation as an act of making a process, good or service easy to obtain by making it as uniform, plentiful, and affordable as possible, one can see the direction the business landscape is taking.

“Irrespective of having a consumer or business focus, software has become significantly more flexible than the bloated code of the past,” adds Rabson. “A few years ago, hardware costs were high and the unwieldy software solutions were designed to put those systems through their paces whether it was needed or not.”

However, the mobile landscape of today, with smartphones, tablets and (to a lesser extent) notebooks, necessitates solutions that are light, easy to use and a means to an end. Consumers and business users no longer care for all the intricacies involved in solutions, but instead require things to do what they are supposed to do.

We have moved beyond the ‘bits and bytes’ generation and into one that operates in real-time to get the most advantage out of the systems and software used. No longer is the software industry dominated by three or four multinational organisations. Today, virtually any person can quickly drag and drop code and develop an app to fulfil a need.

“Just how much companies are willing to change to accommodate this, only time will tell,” concludes Rabson.


Johnson Controls Metasys upgrade drives improved operations and energy savings at NMMU

By admin, 12 May 2014

Nelson Mandela Metropolitan University (NMMU) is a South African higher education institution located along the country’s south-eastern coastline in the city of Port Elizabeth. It offers vocational training and professional degrees up to doctoral level, has six campuses and approximately 27,000 students. The main NMMU campus is built on a 720-hectare nature reserve. The South, North and Second Avenue campuses are all situated close to the beach. The Bird Street Campus is in the Central Business District of Port Elizabeth, while the Missionvale Campus is 20km away from the Main Campus. The George Campus is 300km further west along the coast.

High demand for remote multi-campus management, energy management

The former University of Port Elizabeth installed a Johnson Controls Metasys building management system (BMS) in the 1970s. In 2005, NMMU was formed through the merger of three institutions: the University of Port Elizabeth, the Port Elizabeth Technikon and Vista University. After the merger, NMMU found itself challenged to adequately manage an additional 140,000m2 of geographically remote administrative, lecture, research and residential facilities, so in 2006 the Metasys BMS was expanded to incorporate the additional sites. In 2008, spurred by an energy crisis in the country which has seen energy costs continue to escalate sharply, the need to incorporate energy management across all campuses became a priority. The entire BMS was upgraded in 2012, with improved electrical metering, full remote management, energy management and detailed reporting.

“We needed a BMS solution that would assist us to manage a multi-campus facility. It had to be simple and easy to manage, and it had to go beyond alarm response and managing building comfort levels to incorporate energy management,” says Peter Peters of Technical Services at NMMU.

NMMU’s Facilities Management Department worked with ITD Airconditioning, an approved Johnson Controls Building Controls Specialist (ABCS), to upgrade the entire system to the latest Web-based Extended Application and Data (ADX) system and to install new Network Automation Engine (NAE) controllers. The upgrade took place over eight months, with the hardware change followed by the installation of new software onto the NMMU servers and the development of more than 130 graphic pages. The Johnson Controls Energy Essentials reporting package was also installed for enhanced and automated electrical consumption reports.

Driving operations, getting results

The BMS is used in the day-to-day operations of the university. It monitors and controls electrical consumption; street and walkway lighting; heating, ventilation and air conditioning equipment; hot water generation at residences; fire alarms; critical equipment alarms; and climate control chambers used in research areas and laboratories. The Johnson Controls Metasys BMS is also an important element of the energy management programme implemented by NMMU. With more electrical meters directly connected to the BMS, consumption can be measured more accurately, maximum demand usage monitored and load shedding implemented to meet maximum demand targets – all in real time. The ability to monitor the overall and sub-metered energy consumption of each individual building has proven an important element of the success of the energy management programme.
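
To make the real-time maximum-demand control concrete, here is a hedged sketch of the control logic only. The demand target, the list of sheddable loads and the readSiteDemandKw/setLoad helpers are illustrative placeholders, not Metasys functions or NMMU's actual configuration.

  // Illustrative demand-limiting loop: shed non-essential loads near the target,
  // restore them once demand falls back to a comfortable margin.
  const DEMAND_TARGET_KW = 2000; // assumed maximum-demand target
  const SHEDDABLE_LOADS = ["walkway-lighting", "residence-hot-water", "hvac-zone-3"];

  async function demandControlCycle(
    readSiteDemandKw: () => Promise<number>,
    setLoad: (id: string, on: boolean) => Promise<void>,
  ) {
    const demandKw = await readSiteDemandKw();
    if (demandKw > DEMAND_TARGET_KW * 0.95) {
      for (const id of SHEDDABLE_LOADS) {
        await setLoad(id, false); // shed in priority order
      }
    } else if (demandKw < DEMAND_TARGET_KW * 0.8) {
      for (const id of SHEDDABLE_LOADS) {
        await setLoad(id, true); // restore when there is headroom
      }
    }
  }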

To date, approximately 50 of the identified 100 metering points have been installed. The target date for completion is 2016. Nonetheless, over the last six months approximately 10 percent has been cut from NMMU’s energy bill. This was achieved even though demand grew over the same period, with approximately 11,000m2 of new buildings having been constructed and student numbers continuing to climb; the actual savings made are therefore much larger.

As more meters are installed and the university’s load shedding abilities increase, the university hopes to achieve greater savings, supported by green campus initiatives offering students incentives to achieve targets.

An energy management committee has been formed, with ITD Airconditioning assisting with implementing identified initiatives, continually refining targets and the responsiveness of the system.

NMMU will achieve full return on its investment in the Metasys BMS upgrade within one year, measured in energy savings alone. What is more vital for the university, however, is the additional control and the accompanying savings in time and manpower that the remote monitoring and management capability of the BMS provides. Says Peters: “Building management systems directly impact multiple aspects of NMMU’s operations, from creating a comfortable environment for teaching and learning, to ensuring control of sensitive environments for research purposes, managing energy use in residences, driving security and, an increasingly important factor, managing costs. The Metasys BMS is an invaluable tool – without it we would not be able to effectively manage the buildings on the university’s multiple campuses.”

