
Manage your Storage Better

PCQ Bureau

Storage, in a way, is like bandwidth: you never have enough of it. No matter how much storage you provide for your users and applications, it quickly gets exhausted and you need to plan for more. But adding more and more is not a solution. You also have to manage your existing storage better, so that it's utilized more effectively. You must also ensure that your storage equipment performs fast enough for quick data storage and retrieval. Storage management, therefore, is the art of balancing the addition of more storage capacity against utilizing the existing capacity more efficiently.


Storage management starts with data management, with the end objective of ensuring that data critical to your business is available when you need it and at the speed you need to retrieve it. Learn to classify your data according to its relevance, and everything else will fall in place. For example, you might need sales records for the past ten years, but do you need all of them? And if you do, do you want them available for ready access all the time? Of course not. Then why should it eat up your precious hard drive space? Move all the past data to a cheaper medium such as tape and archive it. You can always retrieve it when there's need.
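The move-old-data-to-cheaper-media policy described above can be sketched in a few lines of Python. This is a minimal illustration, not a production archiver: the directory names and the three-year cutoff are assumptions for the example.

```python
import os
import shutil
import time

def archive_old_files(live_dir, archive_dir, max_age_days=365 * 3):
    """Move files untouched for longer than max_age_days from live
    storage to a cheaper archive location (e.g. a tape-staging area).
    Returns the names of the files that were moved."""
    cutoff = time.time() - max_age_days * 86400
    os.makedirs(archive_dir, exist_ok=True)
    moved = []
    for name in os.listdir(live_dir):
        path = os.path.join(live_dir, name)
        # Only regular files whose last modification predates the cutoff
        if os.path.isfile(path) and os.path.getmtime(path) < cutoff:
            shutil.move(path, os.path.join(archive_dir, name))
            moved.append(name)
    return moved
```

A real policy would also record what was archived and where, so the data can be located when there's need.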

The same goes for other data too, whether it's your e-mail or databases. E-mail, for instance, can be quite difficult to manage. For one, it's among the fastest-growing types of data in any organization. Secondly, more and more official communication is moving over to e-mail, to the point where it can be used as legal evidence. In that case, can you afford to let it reside only in users' personal mailboxes? The enterprise needs to ensure that it's backed up so that it can be retrieved whenever required. If that's the case, should you allow your users to use their official e-mail addresses for personal purposes? Many organizations don't, because if you do, the amount of e-mail that needs to be backed up will grow by leaps and bounds. The same goes for data residing on your employees' desktops and laptops: you need to sort business data from personal data on those machines. Capacity planning, therefore, is a major issue in storage management.

All this sounds very simple in theory, but companies spend millions just on proper data management. The entire storage industry is toiling to bring out solutions that will make this happen, and there's still a long way to go before proper storage management can be done. Even interoperability between various storage devices is an issue. Today, you might be backing up all data from your file server to a SCSI tape drive. Tomorrow, you upgrade the file server to a NAS box because you need more capacity. Will this box be able to back up data directly to the SCSI drive, or will you have to buy a new one because the old one isn't compatible with it? Even if it supports the SCSI drive, will the backup software allow such a thing? Move up to a SAN, and the problem becomes even more complex. Will the Host Bus Adapter for your RAID array, for instance, be compatible with the new Fibre Channel switch you plan to buy for your SAN? Storage standards still have miles to go before complete interoperability between storage products from different vendors is achieved. Currently, a relatively new category of software known as SRM (Storage Resource Management) tools is trying to tackle this problem of managing multiple storage resources.


These are just some of the issues organizations face in managing their storage. Let's see how to resolve them. We begin with a live example.

Managing growth: Lessons from CDSL





Central Depository Services (India) Limited, as the name suggests, is a depository service promoted by BSE, SBI, Bank of Baroda and Bank of India. It handles demat accounts of investors. Over around five years since its inception, the company has grown multifold in volumes, with investor accounts increasing from 1 lakh to over 9 lakh. This kind of growth requires an effective storage system. We met the company's VP-IT, Pramod Deshpande, in June last year for a story on the top ten IT implementations in India. At that time, they had only two NAS devices from EMC for their storage, and had implemented disaster recovery at the Reliance data center. This time we approached him for this story, and got some very good insights on storage management. For instance, we were surprised to hear that the NAS boxes they were using eight months ago have already become obsolete and have been replaced by higher-end products, namely the EMC CX600 storage servers. The older NAS boxes are now being used for testing.

Lesson 1: While CDSL (Central Depository Services (India) Ltd) probably had no problems replacing them, the lesson for others buying a storage product is to first determine its roadmap.


The storage array has been connected to a Fibre Channel switch from Brocade. It can scale up to 26 TB of storage capacity, while CDSL is currently using only 1.2 TB. The reason for such scalability is that, as per regulations, all data with CDSL has to be maintained for at least ten years. Moreover, queries from SEBI and the capital market can be quite frequent, so it doesn't make sense to store anything on tape. Therefore, all data, including past records, is kept on the storage array, and a data warehouse is maintained so that data can be retrieved quickly. Another reason for not storing data on tape, according to CDSL, is that tape technology is changing very rapidly. If you store something on tape today, three years down the line there may not be a tape drive that can read it. This leads to higher maintenance costs for tapes. Moreover, disks are becoming cheaper and their capacities are increasing. Since all transactions happening with CDSL are also kept on this storage, it also helps settle disputes arising from somebody uploading data to CDSL and later denying it.

Note here that the deciding factor for the CX600 storage array was not its maximum capacity. Probably by the time CDSL reaches that storage limit, this product too would have become obsolete. Other factors, such as higher throughput capabilities and better software support, were the reasons for going for the product.



Lesson 2: Go beyond storage capacity and also consider other factors. One factor could be the throughput that the storage array can deliver. If it can't retrieve data at the pace needed, then all other capabilities are useless.


One major issue faced by CDSL is that its current data center resides on the 20th floor of the BSE building, which poses several environmental risks as well as administrative and maintenance overheads. For instance, the data center's cooling and humidity levels have to be looked after, and one can't rule out possible water flooding from the rooftop. To resolve this, the company plans to consolidate its data center with BSE's (Bombay Stock Exchange) by moving it to the first floor of the same building. Due to this resource sharing, a part of the management headache, like humidity and temperature checking, will be taken care of by BSE. Plus, since CDSL is using BSE's backbone, it will remove one point of failure, as there would be no separate connection from BSE to CDSL.

Source: TheInfoPro; Storage Wave 3, 2004

The shift to a new location may also be happening because the company is growing and, therefore, needs to improve its facilities. Plans after the shift include consolidating all databases into a central storage system and using the same system for other applications such as BCP.


Organizing your data





Have you ever tried hunting for that old mail a very important client sent you about a year ago? You need to revive your relationship with the client because of a new project, so you must track down all the transactions you had with them earlier. But now you can't find the CD you backed up all your old mail to. Let's take another case. How about the balance sheets for your last two financial years, which you need to re-send to the management because they can't find them on their machines? Chances are that even if they're lying on your local hard drive, it can take forever just to fetch them. The older the data, the greater the difficulty in retrieving it. That's the trouble with data. As it grows over the years, we do end up storing it, but often in such a haphazard manner that even we have trouble finding it. If things are so difficult on a single desktop, imagine what it will be like on a network, where hundreds or thousands of users have stored gigabytes or terabytes of data. Throw in some other factors, like legal and industry compliance and the different storage, backup and retrieval technologies to choose from, and what you get is a fine soup. Enter the wonderful world of data classification. This isn't as easy to do, though, as it sounds.

“The biggest challenge we face today while managing our storage setup is the ever-changing technology and technology upgrades by the storage vendors. For instance, the clear distinction of one technology from another is a problem, such as NAS versus a SAN.”



Edsel Pereira


IT Manager, Glenmark Pharmaceuticals

In classifying data, the end objective to keep in mind is that data should be available when you really need it. For that, you need to classify and organize it accordingly. Don't worry about all the technical terms being thrown at you. Understand the basic needs of your network, and then see whether a solution can help you meet them effectively. Then define a policy for your users and the network for storing data, and ensure that everyone follows it.


Now comes the vital question of how many classes you should divide your data into. Put differently: what is the optimal number of data classes for an organization? This varies from organization to organization, depending on its needs and objectives. Data classification should be as simple as possible, making life easier for everybody. A data class should only be added if the cost of storing and maintaining a particular set of data in its present class turns out to be greater than the cost of creating a new class altogether for that set. The cost here is the cost involved in defining new policies and procedures and maintaining the new class.
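The class-addition rule above amounts to a simple cost comparison. A minimal sketch in Python; the cost figures in the usage example are invented for illustration:

```python
def should_add_class(size_gb, present_cost_per_gb, new_cost_per_gb,
                     new_class_overhead):
    """Return True when it pays to move a data set into a new class.

    present_cost_per_gb: yearly cost of keeping the data in its current class.
    new_cost_per_gb:     yearly cost per GB in the proposed class.
    new_class_overhead:  yearly cost of defining policies and procedures
                         for, and administering, the extra class.
    """
    present_cost = size_gb * present_cost_per_gb
    new_cost = size_gb * new_cost_per_gb + new_class_overhead
    return present_cost > new_cost

# 1 TB at Rs 5/GB/year vs Rs 1/GB/year plus Rs 2,000 overhead: worth it.
# 100 GB with the same figures: the overhead swamps the saving.
```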

Functions of a typical SRM tool

- Storage device discovery, mapping and visualization
- Data collection for real-time as well as historical reporting
- Event management
- Performance monitoring and analysis
- Data management functions for backup, recovery, hierarchical storage management, etc
- Capacity planning and management
- Storage virtualization and provisioning
- Managing storage volumes, tapes, RAID arrays, etc
- Automated policy execution
One common misconception in data classification is that it has to be done on the basis of the age of the data: as the data gets older, it is pushed from the server (where it is readily accessible) to archival storage such as tape. This is not always the case. You need to classify data as a function of its availability, cost and access requirements. Data that users require frequently is kept readily available, and the rest is kept in archival storage. This way, space on the server is saved and access times are kept under control.
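A toy classification function along these lines, placing data by how it is used rather than by age alone. The field names and thresholds are invented for illustration; a real policy would be driven by measured access patterns and business rules:

```python
def classify(record):
    """Assign a storage class from usage, not merely age.

    record: dict with 'accesses_per_month' (int) and
    'business_critical' (bool) keys -- illustrative fields only.
    """
    if record["business_critical"] or record["accesses_per_month"] >= 10:
        return "online"      # fast disk, readily available
    if record["accesses_per_month"] >= 1:
        return "near-line"   # cheaper disk, slower retrieval
    return "archive"         # tape or other archival medium
```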


You might have heard of the buzzword ILM (Information Lifecycle Management), which promises to be the solution for managing your data from its inception to its final deletion. Notice that by classifying your data and making it available when you need it, you're already fulfilling many of the requirements of ILM.

Standards in data storage





Data storage is no longer a niche area in the digital information age. The economic and physical barriers to storing and accessing data are falling, and storage technologies are becoming part of standard IT infrastructure. As data storage matures and begins to look almost like a utility, it requires standards, as in any other technology. Also, as demand for lower-cost solutions, upgrades and maintenance grows, there is increasing pressure to standardize data storage technologies, just as happened with Internet technologies in the 1990s.

Let us look at the storage standards that are under active development. There are many benefits to be gained from the development of standards and from the purchase of products (both hardware and software) that have been built in accordance with them. For vendors, standards accelerate time to market for new products. They commoditize products at a lower level, thus enabling vendors to add value through, for example, more feature-rich management functionality. For the end-user, a vendor's proof of conformance to standards increases confidence that there will be some degree of interoperability between products. When products have been built to standards and are subsequently verified for conformance, there is a greater likelihood that they will perform as expected in a multi-vendor environment.

This list of standards represents the importance that end-users are currently placing on specific standards. End-users profess interest in IP storage technologies (such as iSCSI and FCIP), which is an indicator of the rate of adoption of SANs. In the last 5 to 10 years, SANs were thought of as something that only the largest enterprise customers could afford; the perception was that SANs, based on Fibre Channel, were complex and expensive. In reality, deployment of Fibre Channel SANs crosses markets and companies of all sizes. iSCSI became a standard in October 2003 and SMI-S in September 2004. This article discusses the SMI-S standard mostly from the end-user's perspective.

What, according to you, are the critical problems most companies face in managing their SANs?

Though the industry has embraced FC SANs and most customers are using high-end storage arrays, some of the problems they're facing include the following.

- The storage arrays are still configured as direct-attach devices, which means customers are still creating different volumes for different servers and configuring them as RAID 5 or RAID 1. Here the performance of each server is restricted to the drives associated with it. This approach also requires administrators to pre-allocate storage space for individual servers. Thus, there is no flexibility of redeployment, storage still means discrete islands of information, management is required on an individual-volume basis and, most of all, it becomes a big hassle when customers want to upgrade capacity. These issues have been well addressed by the Enterprise Virtual Arrays from HP, which allow pooling of capacity and then allocating it on demand.

- The tape library/backup environment is still a separate instance of management. The libraries are still managed as JBODs and do not have a controller-based management interface for end-to-end integration. HP has addressed these issues with the ETLA architecture, which uses specialized controller-based functionality within HP tape libraries for enhanced single-pane management. Thus, functionalities like tape masking, Web-enabled management, remote service support and library partitioning are all managed by these cards, which act as controllers within the tape libraries.

- The multipathing component is different across storage vendors. This creates a major problem when customers are looking at heterogeneous SANs, with storage from different vendors and common servers requiring access to volumes created across different storage vendors' boxes. The industry is moving towards supporting native multipathing software.

Today, interoperability between various storage devices has become a major issue. How do you address this issue?



The problem here is not interoperability but more to do with certifications. Since most storage vendors are now using standard SMI-S specifications on their arrays, and because most standards are in place, there is almost no real issue of interoperability, so most combinations should work. However, the bigger issue is that storage data is the key to any organization and, since most storage arrays house critical databases, it becomes very important to ensure that the architecture being deployed by an organization is a certified one. HP does a lot of certification of its own and third-party equipment to test for compatibility and stability of configurations, and publishes most of its testing data on external websites. For example, the EBS matrix from HP publishes the complete certification for HP tape libraries with all combinations of server, platform, third-party backup software, switches, HBAs, etc.

Sanjay Lula, Hewlett Packard

According to IDC, worldwide disk storage will keep increasing at a CAGR of 38.7% till 2005, with enterprises buying slightly more than 1.4 million terabytes in 2005. Computing power is doubling every 18 months, while storage capacities are doubling every 12 months. Fortunately, the cost per MB has continued to decrease, supporting this continuous demand for data storage. The challenge, however, is to protect and manage this ever-rising volume. IT resources are not expanding to match the increase in data volume, while the value of the data keeps increasing in terms of its criticality to the business. Data has become one of the business's most important assets, and it is more important than ever to maintain accessibility to it. So where are we today with networked storage, and what is the industry doing to address this management dilemma?

The state of standards



SNIA, the Storage Networking Industry Association, formed the Storage Management Initiative to develop an open standard for storage management. We call it an initiative because it isn't just SMI-S, the specification. The specification has been standardized as ANSI INCITS 388-2004, American National Standard for Information Technology, Storage Management, and has been submitted for ISO accreditation.

SNIA has always focused on delivering value in a vendor-neutral way and consequently interoperability has been a big driver for the organization. Interoperability is delivered through standardization and through vendor adherence to this.

SNIA is at work on a number of standards apart from SMI-S, including the DDF (Disk Data Format) standard. This standard describes the layout of data in RAID configurations. When RAID vendors conform to the DDF standard, customers will be able to upgrade their internal RAID seamlessly, or upgrade their servers without having to reconfigure their RAID setup. This proposed standard is being taken through the ANSI process now.

Another area of standards development is the multipath management API, which will allow client vendors to discover and categorize all multipath devices, such as switches and HBAs, and identify the paths that storage will use. Other elements, such as path prioritization, are in the works.

SMI-S is the most important standard to have emerged from the SNIA, and perhaps one of the most important storage standards overall. SMI-S changes the business model for management application software development, for the good of customers and the industry alike. When widely adopted, SMI-S will streamline the way the entire storage industry deals with management. Management application developers will no longer have to integrate incompatible, feature-poor interfaces into their products via expensive, custom, hard-to-maintain infrastructures. They will be better able to concentrate on developing features and functions that are valuable to end-users. With one set of object models and one protocol stack, management of networked storage should actually become simpler and cheaper than managing an equivalent amount of storage attached directly to servers.

This standard provides a ubiquitous management interface for the various devices in a SAN. The premise is that an end-user will be able to buy storage hardware, be it SAN-attached disk, NAS, switches or HBAs, from any vendor, and the management application they have elected to deploy will be able to manage these devices in a common way, without any special tools, kits or incompatible software programs. Agent proliferation is reduced, and the number of storage management software tools may come down as well. For end-users, the value proposition is significant: they will be able to select products (both hardware and software) based on the requirements of their applications, with the confidence that these products will interoperate. Verification of conformance to any standard is important; with verification of conformance to SMI-S, end-users can deploy storage networks with the confidence that they will be able to reduce management costs. Benefits to the industry are also self-explanatory (accelerated product acceptance and time to market, lower development costs, spurred innovation and an expanded total market) and, in a perfect world, these cost savings will be passed on to end-users.

The next standard that SNIA is working on is for ILM (Information Lifecycle Management). The objective is to establish interoperability amongst ILM solutions and services.

'Standards in data storage' is contributed by P K Gupta, Chairman, SNIA India

Managing e-mail

The average amount of e-mail that reaches a typical corporate mailbox on a typical day is horrifying, to say the least. Mixed with regular official correspondence, you will find spam, scams and personal messages. Over time, your e-mail storage allocations appear pitifully small. However, much of this can easily be prevented by managing incoming e-mail at various entry points rather than allowing all of it to reach the mailboxes. E-mail servers come with spam blockers and filters: these can be used wisely at the server stage to block spam. Even firewalls have such filters nowadays and can be used to stop spam from entering your infrastructure. These two measures alone can halve the amount of storage you have to allocate, besides giving you the added benefit of reclaiming your bandwidth.
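As a rough sketch of the entry-point idea, here is a toy gateway filter in Python. Real mail servers and firewalls implement far richer versions of the same policy (scoring, blocklists, content analysis); the spam markers below are invented for the example:

```python
# Illustrative subject-line markers only; a real filter would use
# scoring engines, sender reputation and blocklists, not a tuple.
SPAM_MARKERS = ("lottery", "you have won", "claim your prize")

def accept_at_gateway(subject):
    """Reject mail whose subject carries obvious spam markers before it
    ever reaches (and fills) a user's mailbox. Returns True to accept."""
    subject_l = subject.lower()
    return not any(marker in subject_l for marker in SPAM_MARKERS)
```

Every message dropped here is storage you never have to allocate and bandwidth you never spend backing it up.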

In a corporate environment, it may be policy either to have employees delete mail from the server after downloading it to the e-mail client on their desktops, or vice versa, to always keep the mail on the server and none on the desktop. Either way, you need space to store it. Managing storage at the desktop level is a little more difficult, in that you would need to back up and manage each desktop individually. At the server level, the load is so large that you might need specialized software to help you out. This software would also need to be able to locate that critical e-mail for you, with minimal system overhead.

Recently, we reviewed Legato EmailXtender (PCQuest, Dec 2004, page 131), which is one such solution. It works with Exchange, Lotus Domino and even Sendmail systems. Legato's solution takes care of storage by letting you extend the space used by the mail server onto different systems or even specialized SANs. EmailXtender also has excellent search capabilities.

Managing databases



Databases are tricky to manage. For one, enterprise database servers like Oracle and MS SQL Server have their own built-in backup and restore systems, so anything you build on top of or around them needs to work well with those systems. And you never know what is going to trigger a stored procedure that will attempt to create a 100-year historical progress report of your enterprise. Your storage solution needs to be three-fold (live, near-live and offline), but in such a way that information from all three is accessible instantly. It is not sufficient to say 'move N rows from the bottom of table T to offline storage' just because the data is 10 years old. You need to study all your applications, understand how they work, and gauge the chances that one of them is going to want this data yesterday.
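As a sketch of what the naive 'move N rows' approach looks like, here is a minimal row-archival routine using Python's built-in sqlite3 module. The table and column names are invented for illustration, and, as the text warns, a real policy would first confirm that no application still needs the rows online before running anything like this:

```python
import sqlite3

def archive_rows(conn, cutoff_year):
    """Move sales rows older than cutoff_year into an archive table.
    Returns the number of rows moved. Illustrative schema: a 'sales'
    table with a 'year' column."""
    cur = conn.cursor()
    # Create the archive table with the same columns, if absent
    cur.execute("CREATE TABLE IF NOT EXISTS sales_archive AS "
                "SELECT * FROM sales WHERE 0")
    cur.execute("INSERT INTO sales_archive SELECT * FROM sales WHERE year < ?",
                (cutoff_year,))
    cur.execute("DELETE FROM sales WHERE year < ?", (cutoff_year,))
    conn.commit()
    return cur.rowcount  # rows deleted from the live table
```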



Unlike e-mail and file-system management, it is not possible to implement a continuous storage management feature for database systems, as the database implements database, table, row and column locking.

Some database systems (like MS SQL Server) allow you to specify more than one location to store the files for each database. This is useful because it allows the back end to intelligently span your data across this storage as per availability and ease of retrieval. So you could have a small file (say, 2 to 5 MB) allocated per database on the database server itself and a lot more on one or more NAS boxes. You can then manage each file's growth and fill rate separately from the management console.

A vital but often ignored part of database storage management is log files. RDBMSs generate log files for almost all the operations they perform. Even when your server is 'idle' (as in, no queries are being processed), it is still in operation, performing administrative tasks on your data: checking integrity, rebuilding indices and so on. Each of these events is logged for future trace-backs, and the logs pile up in three different locations: the Windows Event Log, the RDBMS's own log file and the transaction logs of that particular database. Yes, these files also need to be spanned, backed up and restored along with your databases, because they are vital to your infrastructure.
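One way to honor the 'back up logs along with your databases' rule is to bundle both into a single archive, so the two can never drift apart. A minimal sketch; the file paths in a real deployment would of course differ:

```python
import os
import tarfile

def backup_with_logs(db_files, log_files, backup_path):
    """Bundle database files and their log files into one compressed
    archive, so logs are never backed up separately from the data
    they describe. Returns the archive path."""
    with tarfile.open(backup_path, "w:gz") as tar:
        for path in list(db_files) + list(log_files):
            tar.add(path, arcname=os.path.basename(path))
    return backup_path
```

Restoring then means extracting one archive, with data and transaction logs guaranteed to be from the same backup run.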

Choosing the right SRM tool



SRM, or Storage Resource Management, is the latest buzzword making the rounds and is touted to be the fastest-growing segment of the storage sector. SRM tools help if you have a highly distributed storage setup with a lot of storage devices. In such an environment, the biggest challenge is managing so many different devices. That's because each device, whether it's direct or network attached storage, or even a Fibre Channel switch, has its own management or operating interface. As a result, each has to be managed individually, which is time consuming and incurs a lot of administrative overhead. SRM tools aim to alleviate this problem by providing a single management console for all storage resources on your network. Two key benefits are better utilization of your storage resources and their increased availability and uptime. This is the sort of functionality every organization would love to have, but beware: there are several catches.

If we were to draw a parallel, storage management tools are like NMS (Network Management Software). They monitor all storage devices on your network, just as an NMS monitors all network devices. If you've worked with an NMS, you know how complicated they can get. They can manage everything from nodes to switches and routers, to network printers and even power-conditioning equipment such as UPSs. This is possible because most network management software is based on the SNMP standard: any networking device that supports this protocol (which most do today) can be managed by an NMS. SRM tools, on the other hand, are still struggling towards a common standard that all devices conform to. Given the sheer number of storage devices that exist and the lack of a common standard across them, no one SRM tool can manage all the devices.

Coming back: though most network devices today support SNMP, they still don't allow the NMS to manage them completely. A router or switch, for instance, might pass traffic information on to the NMS, but if you want to configure policies on it, you'll need the proprietary software that came with it, unless, of course, the NMS belongs to the same vendor that supplied the routers (e.g. Cisco). The same is the case with SRM tools. If you buy SRM tools from the vendor that also supplied the hardware (IBM, for instance), the level of control you'll have will be far higher than with a third-party tool. However, you might have to pay a premium for this.
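To make the 'single console' idea concrete, here is a trivial home-grown version of the simplest SRM function listed earlier: polling utilization across a set of volumes and raising alerts. This is a sketch of the concept, not a substitute for an SRM tool, which would also discover devices, collect history and manage events:

```python
import shutil

def usage_report(paths, alert_pct=90.0):
    """Poll a set of volumes (the way an SRM console polls storage
    devices) and flag any whose utilization crosses alert_pct."""
    report = {}
    for path in paths:
        total, used, free = shutil.disk_usage(path)
        pct = 100.0 * used / total
        report[path] = {"pct_used": pct, "alert": pct >= alert_pct}
    return report
```

A scheduled run of something like this, feeding a single dashboard, is the kernel of what SRM consoles do across hundreds of heterogeneous devices.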

Before deciding on an SRM tool, determine what you want to manage with it. A wrong choice would mean more complication, since SRM tools can offer far more functionality than you really need. For instance, if all you need is to monitor your storage devices and get alerts and reports on their status, then ensure this is what you get and nothing more. Most SRM tools are very modular, so you can choose just the components you really need. Another pitfall to beware of is that vendors often use the terms SRM tool and SAN management software interchangeably. Be careful of such marketing gimmicks.

Is consolidation an answer?

Let's take a typical scenario most IT managers are familiar with. Your file server just ran out of disk space, so you have to add another hard drive to it. Just then you realize that your e-mail server has started slowing down, because e-mail is piling up and quickly eating up all the storage you allocated to it. So you end up buying more storage for that server. Naturally, since all this data is precious to your organization, you need to ensure it's backed up regularly. Unfortunately, since the data has grown so much, the existing tape drives aren't sufficient, so you end up buying another faster, higher-capacity tape drive. Obviously you can't throw away the old tape drive, because the new drive doesn't read tapes from the previous one. Plus, it can still be used for backing up some of the data. Your company is also planning to implement a business application, for which additional servers need to be purchased; naturally, its data would also require additional storage space, which would then also need to be backed up.

The result of all this is that the organization ends up managing lots of discrete islands of storage. Every server has its own separate storage space. And oops, we almost forgot about all the hard drives inside those desktops. You have to ensure that data on those is also safe and backed up properly.

As you can see, it's not a pretty sight, and in fact as per some estimates, managing all this storage costs 5 to 7 times more than the initial cost of the storage hardware itself. You can do the math! Include the cost of management software for managing all these devices, the additional skilled manpower you've recruited, and the overtime you have to pay them for backing up data over weekends and after office hours. Then don't forget to include the maintenance and running costs (tapes, crashed hard drives, downtime, etc).
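The back-of-the-envelope math suggested above can be written down explicitly. The 5-to-7x management multiplier is the estimate quoted in the text, not a fixed law, and the other figures below are placeholders:

```python
def storage_tco(hardware_cost, mgmt_multiplier=5.0):
    """Rough total cost of ownership for storage: the article's estimate
    puts management at 5 to 7 times the initial hardware cost, so the
    total is hardware plus management."""
    return hardware_cost * (1.0 + mgmt_multiplier)

# Rs 1,00,000 of hardware implies Rs 6,00,000-8,00,000 of total cost
# over its life, depending on where in the 5-7x band you land.
```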

One possible solution is to consolidate all storage into a single pool, so that you have just one device and its software to manage. Sounds easy? That's what most vendors want you to think, which is why you hear all the noise about storage consolidation. What we suggest is that you evaluate your options properly before jumping into anything. A lot of planning is needed for consolidation. One of the biggest challenges in centralizing data into a single storage pool is managing performance levels, because all your servers would now be accessing this single pool for their data. So you have to determine how much throughput you would really need and whether the available solutions can deliver it. If you go the SAN way and put all your storage devices on a separate Fibre Channel network, you're faced with yet another major problem: cost. One, for the SAN itself; two, for the software to manage it; and three, for building redundancy into it. You wouldn't want to be left dangling if your single storage pool crashes.
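The throughput question can be framed as a worst-case sizing exercise: assume every server hits its peak at once and add headroom for growth. A deliberately pessimistic sketch (the headroom factor is an assumption; a real sizing exercise would use measured concurrency):

```python
def required_pool_throughput(server_peaks_mbps, headroom=1.3):
    """Worst-case throughput a consolidated storage pool must deliver:
    the sum of every server's peak demand, scaled by a headroom factor
    for growth. server_peaks_mbps: per-server peak throughput in MBps."""
    return sum(server_peaks_mbps) * headroom
```

If the candidate array's sustained throughput falls below this figure, consolidation will simply move the bottleneck rather than remove it.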

So on one side there's the cost of managing many storage devices, and on the other the cost of implementing and maintaining a central pool of storage. Calculate the costs associated with each, and what you will probably end up with is a mix of both.

In conclusion, remember that storage management starts with data management. First plan for that and then choose the technologies to make it happen.

Anil Chopra, Ankit Kawatra, Anoop Mangla, Geetaj Channana, Sujay V Sarma
