
Database access speed w/ multiple logicals


  • Database access speed w/ multiple logicals

    I would expect at least a small savings in the time it takes to validate the data being entered against all of the logical files. To gain a further reduction in maintenance time, pay close attention to the logical files themselves.

    1) Take advantage of shared access paths. For example, suppose you have two indexes: A has keys of Company, Plant, Product Line, and Part; B has keys of Company, Plant, and Product Line. If you build logical file A and then B, they may be able to share the same access path, reducing the overhead of maintaining the indexes. If you build B and then A, two separate access paths will be created, increasing the effort required to maintain them.

    2) Consider access path maintenance. If some of the logical files are used infrequently, consider setting their access path maintenance to *REBLD to reduce the daily overhead of maintaining them. This is especially valuable if the logicals are built over very volatile files or key fields.

    HTH, Kevin
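    A minimal DDS sketch of logical file A as Kevin describes it, with hypothetical library, file, and field names (column positions approximate):

    ```
         A          R HISTREC                   PFILE(MYLIB/HISTPF)
         A          K COMPANY
         A          K PLANT
         A          K PRODLINE
         A          K PART
    ```

    Logical B would be identical minus the `K PART` line. Because B's keys are a leading subset of A's, creating B after A lets the system share A's access path instead of building a second one. For the infrequently used logicals, maintenance can be deferred with a command along the lines of `CHGLF FILE(MYLIB/HISTL09) MAINT(*REBLD)` (logical file name hypothetical).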

  • #2

    Years ago, the rule of thumb was that if a physical had more than eight logicals, performance would suffer. Today that may not be the case. New indexing algorithms are in place, and creating an encoded vector index (EVI) can dramatically speed up sequential access. The bottom line is affected by the number of key fields, the type of access, and whether or not the file is periodically reorganized, as well as other factors. Dave
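    An encoded vector index is created with DB2 for i SQL; a hedged sketch using hypothetical library, table, and column names:

    ```sql
    -- EVIs suit low-cardinality columns and queries that read a large
    -- slice of the table; they cannot be used to order results.
    CREATE ENCODED VECTOR INDEX MYLIB.HIST_EVI
        ON MYLIB.HISTPF (REGION)
    ```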



    • #3

      Building logicals with select/omit can add to overhead, and it's always been my understanding that access paths built for logicals with select/omit criteria will not be shared. Better to build the access path you need, then use OPNQRYF or program logic to select or omit records.
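      The OPNQRYF approach might look like this CL sketch (file, field, and program names are hypothetical):

      ```
      OVRDBF  FILE(HISTPF) SHARE(*YES)          /* share the open data path */
      OPNQRYF FILE((MYLIB/HISTPF)) +
              QRYSLT('STATUS *EQ "A"')          /* select in the open, not  */
                                                /* in a S/O logical file    */
      CALL    PGM(MYLIB/HISTRPT)
      CLOF    OPNID(HISTPF)
      DLTOVR  FILE(HISTPF)
      ```

      The underlying plain keyed access path stays shareable, and the selection cost is paid only when this job runs.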



      • #4

        A couple of finer points to add to the prior comments:

        - Besides *REBLD you could also consider *DLY. The advantage of this approach is that you can turn the setting on and off around periods of intense insert or delete activity, saving index maintenance time without requiring a full index rebuild.
        - If you have a query-intensive environment, be a bit careful about using *REBLD or *DLY, as the query optimizer might need one of those indexes for a critical query. Over a file this large you could be in for quite a surprise.
        - The index maintenance cost will certainly be better when the file goes from 43 to 15 million rows. The index tree structure will be shallower and thus require less disk I/O to add or remove keys. The primary savings will be disk I/O and memory, but you will also save a small amount of CPU per index.
        - While EVIs are a good option where queries process 20-70% of the table, be careful to build them only over columns with a relatively static number of distinct values (for example, don't build one over a timestamp column that is essentially unique). The good news is that for the right kind of column the maintenance cost is smaller than for a binary tree structure. The downside is that they are not usable for queries that do ordering.
        - Select/omit logical files introduce several issues. Many databases have been built to use them rather indiscriminately; in many cases they select over 90% of the underlying rows. These should be changed to use the DYNSLT keyword, which allows access path sharing to take place. The other major problem with select/omit logicals is that they disable SQE (the new query optimizer) by default.

        Mark
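        The *DLY toggle around a heavy batch window, sketched in CL with hypothetical file and program names:

        ```
        CHGLF FILE(MYLIB/HISTL07) MAINT(*DLY)    /* defer index maintenance */
        CALL  PGM(MYLIB/BATCHLOAD)               /* intense insert/delete   */
        CHGLF FILE(MYLIB/HISTL07) MAINT(*IMMED)  /* catch up, resume normal */
        ```

        A select/omit logical is converted to dynamic select by adding the file-level DYNSLT keyword to its DDS and recreating the file; the selection then happens at read time rather than in a dedicated access path, so sharing becomes possible.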



        • #5

          I have a history file with 50 logical files over it, and it is currently at 43 million records. If all I do is reduce this to 15 million records, will I see a reduction in the time it takes to write a new record to this file and update the logicals? Or will that stay basically the same?
