Attila Kovacs

SQL Server Filtered Index



It's slightly off-topic, but I can't just pass over it. SQL Server (since 2008, I think) offers indexes with a "WHERE" clause;

for example, if you want to rule out duplicate values but permit duplicate NULLs, you can write:

 

CREATE UNIQUE NONCLUSTERED INDEX [YourIndexName]
ON [YourTableName] ([YourFieldName])
WHERE [YourFieldName] IS NOT NULL;

This is sensational. I should read more whatsnew.txt files.
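A quick way to try the same rule outside SQL Server: SQLite has the equivalent feature under the name "partial indexes" (since 3.8.0), so the behaviour can be sketched with Python's built-in sqlite3 module. The table and column names below are made up for the demo.

```python
import sqlite3

# SQLite's partial indexes mirror SQL Server's filtered indexes:
# uniqueness is enforced only where the WHERE clause matches,
# so any number of NULLs may coexist.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (val TEXT)")
conn.execute(
    "CREATE UNIQUE INDEX idx_t_val ON t (val) WHERE val IS NOT NULL"
)

conn.execute("INSERT INTO t VALUES ('a')")
conn.execute("INSERT INTO t VALUES (NULL)")
conn.execute("INSERT INTO t VALUES (NULL)")     # duplicate NULLs allowed

try:
    conn.execute("INSERT INTO t VALUES ('a')")  # duplicate non-NULL rejected
    duplicate_rejected = False
except sqlite3.IntegrityError:
    duplicate_rejected = True

print(duplicate_rejected)  # True
```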

Edited by Attila Kovacs


@Dany Marmur Actually, they are right, and this also makes sense! I'm happy you brought up FB; good to know, even if I'm not using it. Maybe someone will read this.


It makes a lot of sense. I seem to remember it was always like this with multi-segmented indexes. But I am moving away from those.

 

Fun question (a bit OT):

 

Barring legacy schemas and old clients, do you really need multi-segmented indices nowadays?


You know what? Now I know what I'll do if I have some days off: I'll try to port my app to FB, as it supports CTEs too. \o/
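For anyone who hasn't met CTEs yet: Firebird has supported them (including recursive ones) since 2.1, and the syntax is close across engines. A minimal recursive-CTE sketch, run here through SQLite via Python's sqlite3 module rather than Firebird:

```python
import sqlite3

# A recursive CTE that counts 1..5. The WITH RECURSIVE form is
# essentially the same in Firebird and SQL Server.
conn = sqlite3.connect(":memory:")
rows = conn.execute(
    """
    WITH RECURSIVE counter(n) AS (
        SELECT 1
        UNION ALL
        SELECT n + 1 FROM counter WHERE n < 5
    )
    SELECT n FROM counter
    """
).fetchall()
print([n for (n,) in rows])  # [1, 2, 3, 4, 5]
```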

2 minutes ago, Dany Marmur said:

do you really need multi-segmented indices nowadays

Actually, you don't need it if you put the data into a separate table, but I was a lazy dog and googled it instead. I had never used it before.

I'm not sure I can even keep it this way until I finish implementing.


Well, we have a database table that is partitioned on a key - simply because the amount of data is so staggeringly huge.

Multi-segment indexes make sense if the table is really large, or the index is very complex.
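The composite-index idea can be shown in a few lines. This is a hedged sketch in SQLite via Python's sqlite3 module (not SQL Server), with made-up warehouse/lot names: one index over (warehouse, lot) lets a query filtering on both columns seek instead of scan.

```python
import sqlite3

# Composite ("multi-segment") index over two columns.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE movement (warehouse TEXT, lot TEXT, qty INTEGER)")
conn.execute("CREATE INDEX idx_wh_lot ON movement (warehouse, lot)")
conn.executemany(
    "INSERT INTO movement VALUES (?, ?, ?)",
    [("OSLO", "L1", 10), ("OSLO", "L2", 5), ("BERGEN", "L1", 7)],
)

# Ask the planner how it resolves an equality filter on both columns.
plan = conn.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT qty FROM movement WHERE warehouse = 'OSLO' AND lot = 'L2'"
).fetchall()
detail = plan[0][3]
print("USING INDEX" in detail)  # the composite index is used for the seek
```

A filter on the leading column alone (warehouse only) can also use this index; a filter on lot alone cannot, which is the usual trade-off when ordering the segments.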

3 hours ago, Lars Fosdal said:

Well, we have a database table that is partitioned on a key - simply because the amount of data is so staggeringly huge.

Multi-segment indexes make sense if the table is really large, or the index is very complex.

Interesting! Please indulge... (or should that be divulge...? it's late).

First... what do you mean by "partitioning a table on a key"? That is an unknown concept to me.

And why would a multi-segment key (I mean a key made up of more than one field) help when the table is large (record-wise, I assume)? Did you hit the BIGINT limit?

Regarding complex keys... what is the expression index feature lacking?

Guest
56 minutes ago, Dany Marmur said:

First... what do you mean by "partitioning a table on a key". That is an unknown concept to me.

Would it be something like having a table that just stores the keys of the relationships? Then you could load the keys-only table (less memory used), later look a key up in the table with all the data, fetch the record, and work on it.

That way we keep less data in memory and fetch the record (with all its data) only when we need to read it!

An "offline" edit, then save or show it! Like working over the internet in an offline state (edits not synchronized), or, in REST terms, the stateless concept!

When it's ready, send it to the server, and God save me if you can!

 

hug

Edited by Guest


@Dany Marmur
Partitioning is what it sounds like. Just as a disk can be partitioned into multiple file systems, a database table can be partitioned into multiple files.

 

You can do this horizontally - divide up by rows - or vertically - divide up by columns.

 

We use a horizontal partitioning scheme to split, by warehouse, the transaction and movement data of each lot created, processed, stored, moved, and delivered in our 40+ warehouses.

That is many thousands of rows per day per site, and regulation says the data must be stored for a certain period of time past the article expiration date. It is used for tracking purposes if a bad batch should reach the market: we would know who got it, where it has been, where it was made, and from which tanks the raw material came. Another, similar database tracks who delivered to those tanks, but that is a much smaller amount of data.

 

To oversimplify - you can use a partition key to decide how to divide the records in a table between multiple "physical" files and hence multiple indexes.

If a query includes the partition key, the result set comes from one (or more) of the table partitions, giving the secondary keys a much smaller index to deal with.

Partitioning gives you query performance if the partition scheme is well designed. If the result set spans multiple partitions, each of the partition queries can be run in parallel, since there is no overlap in the data, and the result set will be a union of those queries.

 

The expression index doesn't really lack anything; partitioning is purely for dealing with massively large tables.

For examples of the principles of partitioning, take a look at https://www.sqlshack.com/database-table-partitioning-sql-server/
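The routing idea described above can be shown with a toy model: each partition is its own database, a partition key decides where a row lands, and a query that names the key only touches one small table. This is a Python/sqlite3 sketch of the principle, not SQL Server's actual partition functions and schemes; the warehouse names are invented.

```python
import sqlite3

# One "physical" database per partition, keyed by warehouse.
partitions = {}

def partition_for(warehouse):
    """Create or fetch the per-warehouse database."""
    if warehouse not in partitions:
        db = sqlite3.connect(":memory:")
        db.execute("CREATE TABLE movement (lot TEXT, qty INTEGER)")
        partitions[warehouse] = db
    return partitions[warehouse]

def insert(warehouse, lot, qty):
    # The partition key routes the row to exactly one partition.
    partition_for(warehouse).execute(
        "INSERT INTO movement VALUES (?, ?)", (lot, qty)
    )

insert("OSLO", "L1", 10)
insert("OSLO", "L2", 5)
insert("BERGEN", "L1", 7)

# Query WITH the partition key: only one partition is searched.
rows = partitions["OSLO"].execute(
    "SELECT lot, qty FROM movement ORDER BY lot"
).fetchall()
print(rows)   # [('L1', 10), ('L2', 5)]

# Query WITHOUT the key: a union over all partitions (and since the
# partitions don't overlap, these per-partition queries could run in parallel).
total = sum(
    db.execute("SELECT COALESCE(SUM(qty), 0) FROM movement").fetchone()[0]
    for db in partitions.values()
)
print(total)  # 22
```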

 

