Newbie with Postgres, and want to see if I'm doing this correctly before I write a bunch of useless Functions. I have a spreadsheet that I will be getting once a year, with approx. 50,000 rows on it. I need to import it into a database and there will be a lot of duplicate records for many of the tables. Since it only happens once a year, I don't care how long it takes to import, but I want to see if I'm doing this efficiently.
Here's an example:
TABLE: x(x_rowid, x_code, x_name) // where my x_rowid is my IPK
UNIQUE INDEX: ON x(x_code, x_name)
Data: ('FOO', 'BAR')
So during the import, the 2nd occurrence of this data aborted the transaction with a unique-constraint violation (I think that's what happened).
So I wrote a PL/pgSQL function:
SELECT x_rowid INTO my_rowid FROM x WHERE x_code = 'FOO' AND x_name = 'BAR';
IF NOT FOUND THEN
    INSERT statement...;
END IF;
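For reference, here is the check-then-insert pattern above wrapped in a complete function, using the table and columns from the example. This is just a sketch: the function name, parameter names, and the assumption that x_rowid is a serial/identity integer key are mine, not from the original.

-- Illustrative sketch of the check-then-insert pattern.
-- Assumes x_rowid is a serial/identity primary key.
CREATE OR REPLACE FUNCTION import_x(p_code text, p_name text)
RETURNS integer AS $$
DECLARE
    my_rowid integer;
BEGIN
    -- Look for an existing row with this code/name pair
    SELECT x_rowid INTO my_rowid
      FROM x
     WHERE x_code = p_code AND x_name = p_name;

    -- Only insert if the pair wasn't already there
    IF NOT FOUND THEN
        INSERT INTO x (x_code, x_name)
        VALUES (p_code, p_name)
        RETURNING x_rowid INTO my_rowid;
    END IF;

    RETURN my_rowid;
END;
$$ LANGUAGE plpgsql;

Returning the rowid either way means the caller can use it as a foreign key for the other tables in the import.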
So my question is: is this the right thing to be doing? It looks like it works, but maybe there's a better way.
TIA
Jym