I wanted to break up a single 500GB MDF file into 5 files of around 100GB each, so I created 4 NDF files.
I set autogrowth to 1024 MB for the NDF files and OFF for the MDF file.
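As a rough sketch of that setup (the database name, file names, and paths below are placeholders, and I'm assuming the NDF files went into the same filegroup as the MDF so EMPTYFILE can move data into them):

-- Add four NDF files with 1024 MB autogrowth (repeat for MyDB_Data3 .. MyDB_Data5)
ALTER DATABASE MyDB ADD FILE
    (NAME = N'MyDB_Data2', FILENAME = N'D:\Data\MyDB_Data2.ndf',
     SIZE = 100GB, FILEGROWTH = 1024MB)
TO FILEGROUP [PRIMARY];

-- Turn autogrowth off on the existing MDF
ALTER DATABASE MyDB MODIFY FILE
    (NAME = N'MDFLogicalFileName', FILEGROWTH = 0);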
In a SQL Job I used code like …
DBCC SHRINKFILE (N'MDFLogicalFileName', EMPTYFILE);
EMPTYFILE moves the data out into the other files in the same filegroup, and after the second weekend it had left about 88GB in the MDF file.
I tracked progress with a query like this:

--datafiles.sql
SELECT physical_name AS Physical_Name,
       ROUND(CAST(size AS FLOAT) / 128, 2) AS Size_MB,
       ROUND(CAST(FILEPROPERTY([name], 'SpaceUsed') AS FLOAT) / 128, 2) AS Used_MB,
       CONVERT(INT, ROUND(CAST(FILEPROPERTY([name], 'SpaceUsed') AS FLOAT) / 128, 2)
                    / ROUND(CAST(size AS FLOAT) / 128, 2) * 100) AS Used_Pct
FROM sys.database_files
WHERE physical_name NOT LIKE '%ldf%'
ORDER BY physical_name;
I cancelled and deleted the job.
Over the next 3 weeknights I reduced its physical size to 300GB, 200GB, then 100GB using a Job step like …
DBCC SHRINKFILE (N'MDFLogicalFileName', 100000);
I then set the MDF autogrowth to match the NDF files, so the five files would naturally balance (size-wise) over time.
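That step is just a matter of re-enabling growth on the MDF (placeholder names again):

ALTER DATABASE MyDB MODIFY FILE
    (NAME = N'MDFLogicalFileName', FILEGROWTH = 1024MB);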
Lastly, I set up a nightly job to rebuild the most fragmented indexes (thanks again, Ola).
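For reference, the job step was a call along these lines, assuming Ola Hallengren's IndexOptimize procedure with roughly its default fragmentation thresholds (the exact parameter values here are illustrative, not my actual job step):

EXECUTE dbo.IndexOptimize
    @Databases = 'MyDB',                 -- placeholder database name
    @FragmentationLow = NULL,            -- leave lightly fragmented indexes alone
    @FragmentationMedium = 'INDEX_REORGANIZE,INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
    @FragmentationHigh = 'INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
    @FragmentationLevel1 = 5,            -- medium threshold (%)
    @FragmentationLevel2 = 30,           -- high threshold (%)
    @LogToTable = 'Y';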