Channel: dBforums – Everything on Databases, Design, Developers and Administrators

Looking for a Database Management Tool That Allows Different DB Type Connections

Hi,

I am looking for a database management tool (like Control Center) that allows remote management of Oracle, SQL Server, and DB2 databases, all from the same tool. Does one exist?

Thanks in advance.

Possible to speed up my query?

I could really use some help here. Is there a way to redesign my query to improve the performance? When I run it in Excel (MS Query) it takes about 4 minutes, and in Crystal Reports XI it takes between 5 minutes and eternity; sometimes it stops responding completely.

I'm working with 3 tables and a stored procedure.

TABLES
ARTICLE
art_id
art_artnr
art_status

The table contains 51,000 rows and 64 columns. After my conditions (not all are included in the query below), it is reduced to about 2,000 rows.


ARTICLE_STOCKLOCATION
art_id
lp_stock

Table also contains 51000 rows. 31 columns.


ARTICLE_EXTRA
art_id
ae_string_5

Table contains 17000 rows. 15 columns.


STORED PROCEDURE
Myodbc.SP_Get_Transactions
The stored procedure looks like this:

Code:

ALTER PROCEDURE "Myodbc"."SP_Get_Transactions"(
    IN as_artnr NVARCHAR(16),
    IN al_art_id INTEGER
)

RESULT (
    artnr NVARCHAR(16),
    date DATETIME,
    transtype INTEGER,
    ordered DOUBLE,
    reserved DOUBLE,
    stock DOUBLE
    )

BEGIN

...


QUERY
This is the query that I use. It works but it's slow:
Code:

SELECT
ARTICLE.art_artnr,
Transactions.stock + SUM(Transactions.ordered-Transactions.reserved) OVER (PARTITION BY Transactions.artnr ORDER BY Transactions.date ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) as available_stock,
Transactions.date,
Transactions.transtype,
Transactions.stock,
Transactions.ordered,
Transactions.reserved

FROM
MyDB.ARTICLE ARTICLE
LEFT OUTER JOIN MyDB.ARTICLE_EXTRA ARTICLE_EXTRA ON ARTICLE_EXTRA.art_id=ARTICLE.art_id
LEFT OUTER JOIN MyDB.ARTICLE_STOCKLOCATION ARTICLE_STOCKLOCATION ON ARTICLE_STOCKLOCATION.art_id=ARTICLE.art_id
CROSS APPLY Myodbc.SP_Get_Transactions(ARTICLE.art_artnr, ARTICLE.art_id) as Transactions

WHERE
ARTICLE.art_artnr IN (
    SELECT
        TRANSX.artnr
    FROM
        Myodbc.SP_Get_Transactions(ARTICLE.art_artnr, ARTICLE.art_id) TRANSX
    WHERE
        TRANSX.date <= CURRENT DATE
        AND TRANSX.transtype NOT IN (2, 3)
    GROUP BY
        TRANSX.artnr
    HAVING
        SUM(TRANSX.reserved) > ARTICLE_STOCKLOCATION.lp_stock
)

AND ARTICLE.art_status BETWEEN 4 AND 6
AND Transactions.date <= CURRENT DATE
AND Transactions.transtype NOT IN (2, 3)
AND (ARTICLE_EXTRA.ae_string_5 IS NULL OR ARTICLE_EXTRA.ae_string_5<>'UTGÅTT')


What I'm doing:
My report should show all articles whose demand is higher than the current stock.
Is my query "OK"?
The stored procedure is inherently slow: it contains, I think, 24 subqueries (1,300+ lines of code). But other than that, is there anything fundamentally wrong? I'm not very confident with joins and CROSS APPLY...
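One structural point worth noting: SP_Get_Transactions is invoked twice for every article row, once in the CROSS APPLY and once in the correlated IN subquery. A sketch of calling it once and reusing the rows (the temporary-table syntax below is SQL-Anywhere-flavoured and may need adjusting for your DBMS; the column list is taken from the procedure's RESULT clause):
Code:

-- Materialise the procedure's output once per article...
DECLARE LOCAL TEMPORARY TABLE tmp_trans (
    artnr NVARCHAR(16),
    date DATETIME,
    transtype INTEGER,
    ordered DOUBLE,
    reserved DOUBLE,
    stock DOUBLE
) NOT TRANSACTIONAL;

INSERT INTO tmp_trans
SELECT t.artnr, t.date, t.transtype, t.ordered, t.reserved, t.stock
FROM MyDB.ARTICLE a
CROSS APPLY Myodbc.SP_Get_Transactions(a.art_artnr, a.art_id) t
WHERE a.art_status BETWEEN 4 AND 6;

-- ...then run the main SELECT against tmp_trans, and let the IN subquery
-- aggregate tmp_trans instead of calling the procedure a second time.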

Please let me know if you need more information!

InkML (or even just xml) in and out of Mysql

I really don't know where to post this one, so apologies if it's in the wrong place. This is what I'd like to do:

User writes using stylus (or mouse, or finger) onto screen.
When they're finished, they click "Done".
The XML data from the inking (InkML) is saved to mysql using php.

Once I can do this I can think about getting it out of mysql and displaying it again, but this is the first hurdle.

Is it possible? Does anyone know how? I guess I'm really asking how to save any sort of XML data which is generated on the fly (rather than uploading a static XML file) into mysql.
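For the storage side, saving the InkML as plain text is usually the simplest first step; XML is just a string by the time PHP receives it from the browser. A minimal sketch (table and column names are made up):
Code:

-- One row per saved drawing; the InkML document goes into a text column.
CREATE TABLE ink_note (
    id INT AUTO_INCREMENT PRIMARY KEY,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    inkml MEDIUMTEXT NOT NULL
);

-- From PHP, bind the XML string as an ordinary query parameter:
--   INSERT INTO ink_note (inkml) VALUES (?);

Getting it back out is then an ordinary SELECT; the XML returns as a string you can hand to whatever renders the ink.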

numbers stored as char

Hi all.

I have a DB where numeric values are stored as char; the numbers can be larger than an integer (although int8 might work). I can't change that.

Is it possible to create a query that compares these character columns numerically (maybe using CAST within the query)?
E.g., in basic form:
Code:

select * from <XYZ> where bignum > '1999999'
without producing results for 2, 20, 200, 3, 30, 300, etc.

Actually I think I answered my own question . . . using cast ....

EG
Code:

select cast (bignum as int) from <XYZ> where cast(bignum as int) > '1999999'
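For what it's worth, the quotes matter: comparing against a quoted literal can fall back to string (lexicographic) ordering, which is exactly what lets '2', '20', '3', etc. sort above '1999999'. And since the values can exceed a 4-byte integer, casting to a wider type is safer. A sketch (keeping the <XYZ> placeholder from the post):
Code:

-- Cast the column and use an unquoted numeric literal, so both sides
-- of the comparison are numeric; BIGINT avoids integer overflow.
select cast(bignum as bigint)
from <XYZ>
where cast(bignum as bigint) > 1999999;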

function with 3 different parameters

Hello,
In PostgreSQL 9.3 I wrote a function to get data with 3 different parameters:
Code:

CREATE OR REPLACE FUNCTION pd_getproductprices(p_status type_productstatus, p_sku character varying, p_title character varying) RETURNS SETOF pd_product

    AS $$
    SELECT * FROM pd_product AS p
      WHERE
          ( p.status= p_status OR p_status = '-' ) AND
          ( p.sku like p_sku OR p_sku = '-' )  AND
          ( p.title like p_title OR p_title = '-' )
      ORDER BY p.sale_price asc;
$$
LANGUAGE sql;

and when calling it I want to set 0 of these parameters, or all 3, like:

Code:

select * from pd_getproductprices('A', '%pad%', '%za%')
or only 1
Code:

select * from pd_getproductprices('A', '-', '-')
or without parameters at all
Code:

select * from pd_getproductprices('-', '-', '-')
The reason is that I dislike the idea of having numerous functions for all parameter combinations (in reality there would be more parameters and more complex SQL syntax).
I added my type :
Code:

CREATE TYPE type_productstatus AS ENUM (
    'A',
    'I',
    'D',
    'P',
    '-'
);

It seems strange that this enum type has a '-' value, but I need it as a sentinel to call my function.

Questions:
1) Is there a better way than the enum '-' value?

2) The table has an index on the fields used in this function:
Code:

CREATE INDEX idx_product_status_sku_title ON public.pd_product (status,sku,title);
Would index scanning work the same (not worse) with a predicate like
Code:

          ( p.status= p_status OR p_status = '-' ) AND...
?

3) Is there a better way to do this?
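As a sketch of one common alternative (same body as in the post, just different defaults): give the parameters NULL defaults and test IS NULL instead of a '-' sentinel, which avoids adding '-' to the enum entirely:
Code:

CREATE OR REPLACE FUNCTION pd_getproductprices(
    p_status type_productstatus DEFAULT NULL,
    p_sku    character varying  DEFAULT NULL,
    p_title  character varying  DEFAULT NULL)
RETURNS SETOF pd_product
AS $$
    SELECT * FROM pd_product AS p
    WHERE ( p_status IS NULL OR p.status = p_status )
      AND ( p_sku    IS NULL OR p.sku   LIKE p_sku )
      AND ( p_title  IS NULL OR p.title LIKE p_title )
    ORDER BY p.sale_price ASC;
$$ LANGUAGE sql;

-- select * from pd_getproductprices();                 -- no filters
-- select * from pd_getproductprices(p_sku := '%pad%'); -- one filter

As for the index question: an OR against the parameter in this shape often prevents the planner from using the composite index for that column, though simple SQL functions can sometimes be inlined and constant-folded; running EXPLAIN on the call is the way to check.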


Thanks!

DB2: How to detect users who have made changes to a database?

Hi,
I have a question about DB2 logging.
Is it possible to detect which user (id, name?) made changes to the database, and when?
I'm most interested in specific operations on specific tables; for example, is it possible to identify the users who ran UPDATE statements against a given table?

Are there any special DB2 facilities or utilities that can do this?
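One built-in option: from DB2 9.5 on, the audit facility can be scoped to individual tables with audit policies. A hedged sketch (policy, schema, and table names are made up, and db2audit options vary by release, so check the documentation for your version):
Code:

-- Record who executed statements against one table, and when.
CREATE AUDIT POLICY watch_orders
    CATEGORIES EXECUTE STATUS BOTH
    ERROR TYPE NORMAL;

AUDIT TABLE myschema.orders USING POLICY watch_orders;

-- The recorded events (user id, timestamp, statement) are then archived
-- and extracted with the db2audit command-line tool.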

___________
Best regards,
Tatsiana

DB2 UDB 10.5 client install

Hello,

I have a DB2 ESE AIX server with two versions installed:
9.7 - database1
10.5 - database2

My application is located on another AIX server, which already has the DB2 9.7 client installed.
I need to install the DB2 10.5 client as well; it will go in /opt/IBM/db2/V10.5 so it can coexist with the 9.7 version.

I need user1 to access both database1 (9.7) and database2 (10.5).

I already have /home/user1/sqllib for 9.7, created by "db2icrt -s client user1".
Can user1 also access database2 (10.5)? In that case, should I use another directory, like /home/user1/sqllib2?

Thank you,
Tony

Novice Access DB user needs help with historical table

Hi all,
I'm new to the forum and need some help fixing a database I created for work. The basic idea is that there is a series of mostly unrelated tables that help make things a little easier at work. One table tracks calibration dates and breaks them down so that, at the push of a button, it runs a report showing all items due in the current/next month/quarter/year. Another table tracks the status of certain items: simply put, whether the status is up or down, for whatever reason.

The two tables I am concerned with are the Active and Historical tables. In the Active table there should only ever be a single record for a given part number/serial number. What I can't get to work is the Historical table: every time a record is added or changed in the Active table, it should copy only the new record, or the changes, to the Historical table, to enable a trend analysis of sorts.

I tried an append query, but that took all the Active records and appended them to Historical, duplicating records that were already there instead of adding just the updated or new record. Data is entered using two forms, one for new records and one for amendments/updates; upon saving, each runs an append query from table A to table B, but it is not doing what I was looking for. For example, this is what it is doing:

active table records
a
b
c (change c1)

historical table records
a
a
b
b
c
c1
This is what I want it to do:
active table records
a
b
c (change c1)
historical table records
a
b
c
c1
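Rather than appending the whole Active table each time, the save routine on each form can append just the record being saved. A sketch in Access SQL (table, field, and form names here are made up; substitute your own):
Code:

INSERT INTO tblHistorical ( PartNumber, SerialNumber, ItemStatus, StatusDate )
SELECT PartNumber, SerialNumber, ItemStatus, StatusDate
FROM tblActive
WHERE tblActive.ID = [Forms]![frmActive]![ID];

Run from the form's save button (or its After Update event), this copies only the record currently on the form, so the Historical table grows by one row per new record or change instead of duplicating everything.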

Query Returns Crossjoin Results

Hi all,

Using Access 2016.

The query below returns both pm1=pm2 and pm2=pm1, I just need the former.
How can I limit the results of the query?

Code:

SELECT
    a.ASSET_ID AS pm1_assetid,
    IIf(IsNull([a].[time in]),[a].[work_datetime],[a].[time in]) AS pm1_timein,
    a.[SR #] AS [pm1_sr#], a.work_date AS pm1_workdate,
    b.ASSET_ID AS pm2_assetid,
    IIf(IsNull([b].[time in]),[b].[work_datetime],[b].[time in]) AS pm2_timein,
    b.[SR #] AS [pm2_sr#],
    b.Work_Date AS pm2_workdate,
    [pm1_sr#]<>[pm2_sr#] AS Expr1
FROM
    [tbl pm w asset_id] AS a
INNER JOIN
    [tbl pm w asset_id] AS b
ON
    (a.Work_Date = b.Work_Date)
AND
    (a.ASSET_ID = b.ASSET_ID);

thx
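One way to keep each pair only once is to make the pairing asymmetric, e.g. by requiring one SR # to sort before the other (a sketch against the same table aliases; the SELECT list is unchanged from the post, abbreviated here):
Code:

SELECT ...
FROM [tbl pm w asset_id] AS a
INNER JOIN [tbl pm w asset_id] AS b
    ON (a.Work_Date = b.Work_Date)
   AND (a.ASSET_ID = b.ASSET_ID)
WHERE a.[SR #] < b.[SR #];

With < instead of <>, each unordered pair appears exactly once, and self-pairs (pm1 = pm2) drop out as well.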

BLOB_Compact

Hi,

I need to compact BLOBs in a table, and I saw that this is possible with ALTER TABLE, but does anyone here have an example of how to do it with the "compact" option?
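For reference, COMPACT is normally part of the LOB column definition itself; in CREATE TABLE it looks like the sketch below. Whether an existing column can be switched via ALTER TABLE ... ALTER COLUMN depends on your DB2 version, so check the ALTER TABLE reference for your release (names below are made up):
Code:

CREATE TABLE myschema.docs (
    id      INTEGER NOT NULL,
    payload BLOB(2M) COMPACT
);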

Thanks in advance

Error message during "db2adutl grant user ..."

Hello,

While granting a user on one server access to the backup images available on a different server, I get the following error message:

--- ERROR! Database not found in system directory! ---


Error: dsmSetAccess failed with TSM return code 124



The command which caused this message was:

db2adutl grant user db2inst1 on NODENAME PROD for db SAMPLE


We have 2 VMs and 2 TSM Servers:

PROD VM (Database Server)
PROD TSM (here are the prod Backups)

TEST VM (Database Server)
TEST TSM

Now I want to perform a restore into TEST using a PROD backup image. Therefore I have to access the PROD TSM from the TEST environment.


Can anyone help me?

use bldrtn script with remote database

Hi all,
I'm using DB2 10.5 client on Linux, and I have a server on a remote machine which can be Windows, Linux or AIX.

I want to use the bldrtn script to create C++ routines and store them on the server so that they can be invoked by triggers defined on the remote db.
My doubt is: is this OK for remote servers too?

That is, even though the bldrtn script is invoked on the Linux client, does the compiler run on the remote server machine, meaning I need to install the proper compiler on Windows/Linux/AIX? Are there limitations or concerns with using bldrtn this way?

Thanks

Stakehost.com - Dedicated IPs- Root Access - 8GB RAM [PayPal, PM & Bitcoin]

Stakehost.com - Secure fast and reliable

Our mission is to provide our customers the best hosting Services. Stakehost Offers the highest quality perfect Money Hosting & Webmoney Shared, Reseller, VPS, Dedicated web hosting and SSL services at the Cheap prices. We make web hosting simple with reliable servers.

================================================================




We Accept:

- Payza(AlertPay)
- Web Money
- Perfect Money
- Skrill(Moneybookers)
- Bitcoin.
- 2CO(Paypal & Credit Card )
- PayPal





DEDICATED SERVER PLANS & PRICES




------------------------------------------------
Basic Server Plan $199/Mo
------------------------------------------------


Dual Cores
8GB RAM
250GB disk space
Dedicated IPs 2
Unlimited bandwidth

ORDER NOW





------------------------------------------------
Advance Server Plan $399/Mo
------------------------------------------------


Quad Core
8GB RAM
2x 1.5TB disk space
Dedicated IPs 2
Unlimited bandwidth

ORDER NOW




------------------------------------------------
Professional Server Plan $500/Mo
------------------------------------------------


Single XEON E5 2603
32GB RAM
2x1.5TB disk space
Dedicated IPs 2
Unlimited bandwidth

ORDER NOW




Why Stakehost?

Low Prices - Our managed servers offer excellent performance, cheap pricing, and 24/7 support.
Daily Account Backups - All cPanel accounts are automatically backed up daily, with the ability to retrieve the latest daily backup.
Unlimited Features - We offer lots of unlimited features (domains, databases, FTP & email accounts, etc.)


================================================================


If you have any further questions, please contact us at sales or create a support ticket.


Join the Stakehost

Twitter: http://www.twitter.com/stakehost
Facebook: http://www.facebook.com/stakehost

Just learning about db indexing for existing db

Hello, everyone. I was a software engineer for AT&T Bell Labs for 15 years, doing fault recovery programming in C. Recently, I became involved in a genealogy volunteer project that among other things involves a database. I inherited the database and it was working fine for a while, but because of some glitches with the host, I am exploring other options. I'll post my questions in the appropriate forum. I'm basically trying to figure out how to build indexes and write a database search page. When I started working on this project, last year in November, I started teaching myself Java, and by February, I'd written some tools to help manage the project. Lately, I've been teaching myself PHP and MySQL towards the possibility of solving my database problems.

I look forward to working with everyone to learn about databases!

Alice in Illinois :)

Need help with indexing over many files

I've inherited about 7000 database files that use a particular format where the fields are separated by semicolons -- each file has about 2500 lines. The current set-up is that the files reside on one server, and the searching is done by another server. Unfortunately, I only have access to the server where the files reside. The search tools on the other one are broken and I don't know if they will ever be fixed. So I'm trying to figure out how to set up my own search site, which needs to be free if at all possible.

My programming experience is in C, and now Java. I've started learning PHP and MySQL. I am certain I can handle the PHP part and the basic MySQL stuff, but I know I need to build indexes (indices) in order to do my searches efficiently. I found the command for building indexes in MySQL, but my question is: how do I build indexes when the data is spread across several files? And a corollary: how do I tell the indexing process what the schema is?

I'm hoping that I can build the indexes in one place while leaving the database files on the current server, but if necessary I guess I can copy them all over to a new server. Again, though, I need a free solution, because this is a volunteer project -- I'm not getting paid for any of this.
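Since the files are already semicolon-delimited, MySQL can usually ingest them directly; the schema lives in the table definition (the files themselves carry none), and the indexes are then built on the table rather than on the files. A sketch (table, column, and path names are placeholders for whatever your format actually contains):
Code:

-- Define the schema once, matching the field order in the files.
CREATE TABLE records (
    surname    VARCHAR(80),
    given_name VARCHAR(80),
    place      VARCHAR(120),
    event_year SMALLINT
);

-- Load each file (repeat per file, or script the loop):
LOAD DATA INFILE '/path/to/file0001.txt'
INTO TABLE records
FIELDS TERMINATED BY ';'
LINES TERMINATED BY '\n';

-- Then index the columns you search on:
CREATE INDEX idx_records_surname ON records (surname);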

I apologize that I don't know all of the database jargon yet, so please try to explain things in basic terms. Thanks for any help you can give me.

Alice in Illinois

Oracle 12c not able to pick up latest stats

Hi there,

I am new to this community and not yet familiar with the rules of this forum.

I have been wrestling with a problem for the last few months and thought I would seek your expert advice.

We recently migrated to Oracle 12c at one of our client locations, which was previously using Oracle 11g.

There has been performance degradation since moving to Oracle 12c, compared to Oracle 11g, for the same set of data and processes.

While optimising performance, what I have seen is this: a process creates new entries in a few of the tables in the schema. A package with many open cursors reads information from these tables and writes to a corresponding table in the same schema when it is executed. However, the package is slow unless gather_schema_stats (granularity = GLOBAL) is run after the new entries are created and before the package is executed.

For example, I have tested the scenarios below, with these results:
1. Run Gather stats on the schema - > Process to create new entries in table -> Run Package for new entries in table
Outcome - Package execution time is 4.5 hours
2. Process to create new entries in table -> Run Gather stats on the schema - > Run Package for new entries in table
Outcome - Package execution time is 9 minutes

We have also used the optimizer dynamic sampling and In-Memory features, but the issue remains.

The problem is that we cannot run gather_schema_stats in the middle of the process; it can be run only once, either before or after the complete process.

We also tried replacing gather_schema_stats with gathering stats on only the affected tables and indexes, but it had no effect.

gather_schema_stats is a very time-consuming activity and cannot be used.
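If only a handful of tables receive the new entries, gathering statistics on just those tables between the two steps is far cheaper than a schema-wide gather; a sketch (owner and table names are placeholders):
Code:

BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
      ownname => 'MYSCHEMA',
      tabname => 'NEW_ENTRIES_TABLE',
      cascade => TRUE);   -- also refresh the table's index stats
END;
/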

Package with only spec

Hi all,
I am a newbie in PL/SQL programming and I would like to ask whether there is a library of constants in Oracle.

Is it a good idea to create a package spec with constants, in order to use those values in other packages?

For example, I want to find all the chr(num) calls and define them as constants.
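As far as I know there is no built-in library of named character constants, but a spec-only package is the usual idiom for exactly this; a minimal sketch (names are made up):
Code:

CREATE OR REPLACE PACKAGE app_const IS
  c_tab CONSTANT VARCHAR2(1) := CHR(9);
  c_lf  CONSTANT VARCHAR2(1) := CHR(10);
  c_cr  CONSTANT VARCHAR2(1) := CHR(13);
END app_const;
/

-- usage elsewhere:
--   REPLACE(some_text, app_const.c_tab, ' ')

A spec with no body compiles fine, and other packages reference the values as app_const.c_tab; changing a constant later means recompiling dependents, which is usually acceptable for true constants.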

Thank you in advance

function with many parameters raises error

Hello,
In PostgreSQL 9.3 I have a function with many parameters:
Code:

CREATE OR REPLACE FUNCTION public.pd_update_product(p_id integer, p_title character varying, p_status type_productstatus, p_sku character varying, p_user_id integer, p_regular_price type_money, p_sale_price type_money, p_in_stock boolean, p_short_description character varying, p_description text, p_has_attributes boolean, p_downloadable boolean, p_virtual boolean, p_category_list integer[] DEFAULT NULL::integer[], p_tag_list integer[] DEFAULT NULL::integer[], p_attributes_list character varying[] DEFAULT NULL::character varying[], p_created_at timestamp without time zone DEFAULT NULL::timestamp without time zone)
 RETURNS integer
 LANGUAGE plpgsql
AS $function$
  begin
    IF p_id <= 0 THEN
...

Running sql
Code:

select * from pd_update_product( 0, 'Product title_2016-11-07 07:34:14', 'A', 'sku_2016-11-07 07:34:14', 165, 98.76, 65.32, TRUE, 'short_description_2016-11-07 07:34:14', 'description_2016-11-07 07:34:14', TRUE, TRUE, TRUE, ARRAY[164,161,169]::integer[], ARRAY[164,168,172]::integer[], ARRAY [ ARRAY[ 'S:61','800' ], ARRAY[ 'S:63','840' ], ARRAY[ 'S:64','851' ] ]::varchar(255)[][] )
It works ok

But when I run the SQL with named parameters, like:
Code:

select * from pd_update_product( p_id := 0, p_title := 'Product title_2016-11-07 07:34:14', p_status := 'A', p_sku := 'sku_2016-11-07 07:34:14', p_user_id := '165', p_regular_price := 98.76, p_sale_price := 65.32, p_in_stock := TRUE, p_short_description := 'short_description_2016-11-07 07:34:14', p_description := 'description_2016-11-07 07:34:14', p_has_attributes := TRUE, p_virtual := TRUE, p_downloadable := TRUE, p_category_list := ARRAY[164,161,169]::integer[], p_tag_list := ARRAY[164,168,172]::integer[], p_attribute_list := ARRAY [ ARRAY[ 'S:61','800' ], ARRAY[ 'S:63','840' ], ARRAY[ 'S:64','851' ] ]::varchar(255)[][] )
I get this error:
Code:

no function matches the given name and argument types. You might need to add explicit type casts.
The last parameter, p_attribute_list, raises the error: if I remove it from the last SQL, it works OK.
Why the error, when the first SQL works and I don't see a syntax difference? And how can I fix it?
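For comparison, a named-notation call spelled exactly as the CREATE statement declares it; named notation only resolves when every name matches a declared parameter, and the signature above declares p_attributes_list (with an extra "s"):
Code:

-- "..." stands for the other named arguments, unchanged from the post:
select * from pd_update_product(
    p_id := 0,
    ...,
    p_attributes_list := ARRAY[ ARRAY['S:61','800'],
                                ARRAY['S:63','840'],
                                ARRAY['S:64','851'] ]::varchar(255)[][]
);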

Thanks!

UNIQUE constraint WITH NOCHECK and 'NUMERIC_ROUNDABORT' error

Hi,

We have a table with duplicates. To prevent any more from entering the system, it would have been great if we could just add a UNIQUE constraint WITH NOCHECK on the table. But you can't do that with a unique constraint.

As one possible solution, I added an extra column to the table: it stores the id of a record if there is at least one duplicate record with a lower id, and I created a unique index on the PK + that extra column. I was really looking for a solution without an extra column, but that approach generates an error. (If you are puzzled, the code is at the bottom of the post.)

DaTable is a small test-table with a few duplicate and a not-duplicate record.
Code:

DROP TABLE dbo.DaTable;

CREATE TABLE dbo.DaTable (
        ID INT NOT NULL        IDENTITY CONSTRAINT PK_DaTable PRIMARY KEY,
        Naam varchar(30) NOT NULL
) ;
GO

INSERT  INTO dbo.DaTable(Naam) VALUES 
('Tom'),
('Sjerk'),
('Sjerk'),
('Youssef'),
('Youssef'),
('Youssef')
;
GO

SELECT * from dbo.DaTable

CREATE UNIQUE INDEX UNQ_DaTable_UniekeNieuweNamen ON dbo.DaTable(Naam) WHERE id > 6;
GO
--Msg 1934, Level 16, State 1, Line 1
--CREATE INDEX failed because the following SET options have incorrect settings: 'NUMERIC_ROUNDABORT'.
--Verify that SET options are correct for use with indexed views and/or indexes on computed columns and/or
--filtered indexes and/or query notifications and/or XML data type methods and/or spatial index operations.

I have no idea where this 'NUMERIC_ROUNDABORT' error comes from or what to do to solve it. Where does the loss of precision occur??

Or perhaps you know a better way to deal with the problem (existing table with duplicates, and you want it to stop from getting worse).
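For what it's worth, filtered indexes require a specific set of session options when they are created (and when the table is later modified through a connection); NUMERIC_ROUNDABORT is the one that must be OFF, and a tool or connection setting it ON produces exactly Msg 1934, no actual loss of precision involved. A sketch:
Code:

-- Required SET options for filtered indexes (per the SQL Server docs):
SET NUMERIC_ROUNDABORT OFF;
SET ANSI_NULLS, ANSI_PADDING, ANSI_WARNINGS, ARITHABORT,
    CONCAT_NULL_YIELDS_NULL, QUOTED_IDENTIFIER ON;
GO

CREATE UNIQUE INDEX UNQ_DaTable_UniekeNieuweNamen
    ON dbo.DaTable(Naam)
    WHERE id > 6;
GO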

(This is my first solution, with the extra column.)
Code:

DROP TABLE dbo.DaTable;

CREATE TABLE dbo.DaTable
  (
    ID INT NOT NULL        IDENTITY
                CONSTRAINT PK_DaTable PRIMARY KEY,
    Naam VARCHAR(30) NOT NULL
  ) ;
GO

INSERT  INTO dbo.DaTable(Naam) VALUES 
('Sjerk'),
('Sjerk'),
('Youssef'),
('Youssef'),
('Youssef'),
('Tom')
;
GO

-- mark duplicates
ALTER TABLE dbo.DaTable ADD Duplicaat_id int NULL ;
GO

UPDATE  dbo.DaTable
SET    Duplicaat_id = ID
WHERE  EXISTS ( SELECT *
                FROM  dbo.DaTable AS DT
                WHERE  DT.Naam = dbo.DaTable.Naam
                        AND DT.ID < dbo.DaTable.ID )
;
GO

SELECT * FROM dbo.DaTable

CREATE UNIQUE INDEX UNQ_DaTable_UniekeNieuweNamen ON dbo.DaTable(Naam, Duplicaat_id);
GO

-- add an existing name, raises error: ok
INSERT  INTO dbo.DaTable (Naam)
VALUES  ('Tom');
--Msg 2601, Level 14, State 1, Line 1
--Cannot insert duplicate key row in object 'dbo.DaTable' with unique index 'UNQ_DaTable_UniekeNieuweNamen'. The duplicate key value is (Tom, <NULL>).

-- add an existing name that already has a duplicate, raises error: ok
INSERT  INTO dbo.DaTable (Naam)
VALUES  ('Sjerk');
--Msg 2601, Level 14, State 1, Line 2
--Cannot insert duplicate key row in object 'dbo.DaTable' with unique index 'UNQ_DaTable_UniekeNieuweNamen'. The duplicate key value is (Sjerk, <NULL>).

-- add a new name, insert succeeds: ok
INSERT  INTO dbo.DaTable (Naam)
VALUES  ('Wim');

-- succeeds only once, then raises error: ok
INSERT  INTO dbo.DaTable (Naam)
VALUES  ('Wim');
--Msg 2601, Level 14, State 1, Line 1
--Cannot insert duplicate key row in object 'dbo.DaTable' with unique index 'UNQ_DaTable_UniekeNieuweNamen'. The duplicate key value is (Wim, <NULL>).

interview questions

Please share technical interview questions.