Question regarding the proposed pricing: from what I’ve read, pricing is based on row writes/reads regardless of row size. That penalizes normalization then, right? I have a schema that normalizes metadata for DICOM files, which is usually many small, highly redundant key-value pairs. In other words, storing the denormalized metadata would waste a lot of storage, but normalizing means I have many small rows, which then costs more. For example, a typical request might load metadata for 500 files, each file with 100 data items. That’s at least 50,000 read units for one request, right? Alternatively, I could stuff all the metadata for each file into one row, where maybe 60-80% of the data is identical across files. Considering there’s a ceiling on storage with D1, I’m not totally sure where to draw the line here.
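To make the trade-off concrete, here’s a rough back-of-envelope sketch. It assumes my reading of the pricing is correct, i.e. one read unit per row read, independent of row size — the constants are just the example numbers from above, not measured figures:

```python
# Back-of-envelope comparison of read units per request,
# assuming 1 read unit per row read regardless of row size.

FILES_PER_REQUEST = 500   # typical request loads metadata for 500 files
ITEMS_PER_FILE = 100      # each file has ~100 metadata key-value pairs

# Normalized schema: one row per key-value item
normalized_reads = FILES_PER_REQUEST * ITEMS_PER_FILE

# Denormalized schema: all metadata for a file packed into one row
denormalized_reads = FILES_PER_REQUEST

print(normalized_reads)                          # 50000
print(denormalized_reads)                        # 500
print(normalized_reads // denormalized_reads)    # 100x more read units
```

So under this reading, the normalized schema pays 100x the read units per request, while the denormalized one pays extra storage for the 60-80% of data that is duplicated.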