
Failing Disk Drives


  • Failing Disk Drives

    It has been posted on the Midrange cover page with some details, and I've had 3, yes 3, 6717 10K drives go in 3 months.

    We had 1 go 2 weeks ago; it was less than 6 months old. I was surprised by the failure, but to me it's a non-event. It was hot-swapped, and we noticed no degradation in performance. Bill


  • #2
    Failing Disk Drives

    The disk drive story is here: http://www.midrangecomputing.com/mcn...ws.cfm?mcn=542



    • #3
      Failing Disk Drives

      I'm assuming you've got a RAID 5 config? When I've had mine replaced, the engineers did the replacement in dedicated mode, a lukewarm swap at best. When I asked what happened to the hot swap, he said they had known of instances where it didn't work and consequently trashed everything! Bet that wasn't advertised! So they much prefer to do it as a cold swap. Good ol' IBM technology, eh? I've also been told that more microcode is supposed to be coming out soon. I'll leave some messages when I get them. Ian.



      • #4
        Failing Disk Drives

        Fresh from MidrangeComputing: An IBM source says that the company is getting ready to release a new DASD Availability Fixpack for OS/400 V4R4, V4R5, and V5R1. According to this source, IBM has made some changes in the OS/400 and disk drive microcode associated with 10K RPM disk drives that will change some hard errors to predictive errors. This will not spare IBM from having to replace a faulty disk drive, but apparently it will prevent a disk array with RAID 5 protection from dropping out before service, meaning a drive replacement, can be scheduled.

        In many cases, this change in the microcode will allow for proactive detection of a failing drive through Service Director, which will enable IBM to get in there and replace the drive before it fails. Like its customers, IBM wants to avoid having more than one disk fail before it can be replaced and its data rebuilt from the parity data stored on the other disks in the RAID set. If two disks fail in a RAID set, an AS/400 or iSeries server crashes, and the system has to be rebuilt from scratch. This is obviously bad.

        It is unclear when the DASD Availability Fixpack will be released, but my sources expect it to be soon. It will be available for OS/400 V4R4 under PTF number MF27063, for OS/400 V4R5 under PTF number MF27053, and for OS/400 V5R1 under PTF number MF27051. Keep checking IBM's iSeries and AS/400 tech support site for these fixpacks.
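        For anyone wondering why that second failure is fatal: RAID 5 keeps one parity block per stripe, computed as the XOR of the data blocks, so any single missing block can be recomputed from the survivors. Below is a minimal sketch of that arithmetic in Python; the block contents and function names are illustrative only, not anything IBM-specific.

        # Minimal sketch of RAID 5's XOR parity, for illustration only.
        def parity(blocks):
            """XOR data blocks byte-for-byte to produce the stripe's parity block."""
            out = bytearray(len(blocks[0]))
            for block in blocks:
                for i, byte in enumerate(block):
                    out[i] ^= byte
            return bytes(out)

        def rebuild(survivors, parity_block):
            """Reconstruct one lost block from the surviving blocks plus parity."""
            return parity(list(survivors) + [parity_block])

        stripe = [b"AAAA", b"BBBB", b"CCCC"]  # one stripe across three data drives
        p = parity(stripe)                    # parity block stored on a fourth drive

        # The drive holding stripe[1] fails: its block comes back from the rest.
        assert rebuild([stripe[0], stripe[2]], p) == stripe[1]

        With two blocks of the same stripe gone, the XOR has two unknowns and nothing can be recovered, which is why the microcode tries to flag a drive as predictively failing while the rest of the array is still healthy.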



        • #5
          Failing Disk Drives

          We have an 820 that was brought online in March of this year. Since then, we have had 9 drives fail, all 17.54 GB 10K. So far, none of the PTFs has worked, and our CE will admit that. We opened a formal complaint with IBM about it and are waiting to hear back. It has something to do with the controllers now supporting 10 drives instead of 8: all of ours but one have been logical drive #9 or #10 on the controller.



          • #6
            Failing Disk Drives

            How many people out there are having serious problems with newish IBM disk drives failing? It has been posted on the Midrange cover page with some details, and I've had 3, yes 3, 6717 10K drives go in 3 months. I'm actually one of the lucky ones as well; the machine isn't being used for production yet. This is the machine that is supposed to be ultra-stable, ultra-reliable. That's why we pay so much for it, isn't it? The NT-based IT manager here is not amused, or very impressed! Of all things to stuff up. Apparently they're working on it. I'd like to think they're working 24x7! Any others with similar problems? Ian.



            • #7
              Failing Disk Drives

              It just keeps going downhill... Ralph
