I have a PostgreSQL 9.3 database with a "mother" table containing items, and a number of "child" tables containing parts of different kinds (e.g. ...). The items table contains ~500k rows, and each item row references zero or more rows in each part table, so each part table can contain several million rows.

Each table has an _id_ primary key, and each "child" table has a foreign key pointing to item._id_ (with an index) and ON UPDATE/DELETE CASCADE, so that all parts are deleted when an item is deleted. Here is an excerpt from the table definitions:

TABLE item(
CREATE INDEX idx_p1_item ON p1 USING btree(item_id)
CREATE INDEX idx_p2_item ON p2 USING btree(item_id)
CREATE INDEX idx_p2_p1 ON p2 USING btree(p1_id)

SELECT on the item and part tables is quite fast (SELECT * FROM p1 WHERE item_id=?):

Index Scan using "item_pkey" on "item" (cost=0.42..8.44 rows=1 width=6) (actual time=0.043..0.054 rows=1 loops=1)

I don't know much about those costs and their meaning; I just see that the index is being used, but I can't explain why the DELETE takes so long. The deleted items are the old ones, barely accessed (if at all), so I could understand a cache miss.

If I have to choose, the database should be optimized for fast INSERT and SELECT over fast DELETE. Is there any parameter to tune to speed it up? I could also wait for a maintenance window to do things like dropping indexes, deleting rows, and rebuilding indexes, since we can lock down the database (but then, what to do?). Still, I would prefer to be able to do it live, if possible.

Thank you to ... for pointing out EXPLAIN ANALYZE VERBOSE, which shed light on a foreign key loop in my definitions.
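As a minimal sketch of the diagnostic mentioned above (assuming the table names item, p1, p2 from the excerpt; the constraint names shown in the comments are assumptions, not taken from the original schema), EXPLAIN ANALYZE on the DELETE is what reveals the time spent in the cascading foreign-key triggers:

```sql
-- Run the slow DELETE under EXPLAIN ANALYZE inside a transaction,
-- then roll back so no data is actually removed.
BEGIN;
EXPLAIN ANALYZE DELETE FROM item WHERE id = 12345;  -- hypothetical id
ROLLBACK;

-- Below the plan, PostgreSQL prints one line per FK trigger, e.g.:
--   Trigger for constraint p1_item_id_fkey: time=... calls=1
--   Trigger for constraint p2_item_id_fkey: time=... calls=1
-- A trigger dominating the total time usually means the child-side
-- FK column (here p1.item_id, p2.item_id, or p2.p1_id) is being
-- scanned without an index, or the cascade fans out to many rows.
```

The plan body only covers the DELETE on item itself; the cascade cost lives entirely in those trigger lines, which is why the statement can be slow even though every index scan in the plan looks fast.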
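For the "live, if possible" requirement, one common pattern (a sketch under assumptions: the age column created_at and the batch size are hypothetical, not from the original schema) is to delete old items in small batches so each transaction, and each cascade, stays short:

```sql
-- Delete old items in batches of 1000; repeat until 0 rows are affected.
-- Short transactions keep row locks brief and let concurrent INSERT/SELECT
-- proceed, matching the stated priority of fast INSERT/SELECT over DELETE.
DELETE FROM item
WHERE id IN (
    SELECT id
    FROM item
    WHERE created_at < now() - interval '1 year'  -- hypothetical cutoff
    LIMIT 1000
);

-- Afterwards, reclaim space and refresh planner statistics:
VACUUM ANALYZE item;
```

Compared with one huge DELETE, this trades total runtime for predictability: no long-held locks, smaller per-transaction cascade work, and the option to pause between batches.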