Thinking in ORA-494, ORA-239 Instance Crashes and Hangs

I am sharing here some of my thoughts on troubleshooting Oracle database or instance crashes caused by the ORA-00494 or ORA-00239 error in different scenarios. These database or instance crashes can occur on any OS platform running Oracle Server Enterprise Edition from version 10.2.0.4 to 11.1.0.7.

Note:
• The term INCIDENT used in the following parts refers to an incident caused by the ORA-494 or ORA-239 error.

1.1 Root Cause Analysis

This database or instance crash incident can occur in different services running on Oracle RDBMS under different scenarios. It terminates the Oracle database processes and leads the database system to an outage, which disrupts customers’ activities and negatively impacts the SLA for database support.

Root Cause Analysis is necessary to prevent a similar incident from re-occurring in the future.

1.1.1 Error Messages

Two Oracle error messages could be found for this kind of incident:

  • ORA-00239

The blocker process is killed by ITSELF; for example, the LGWR process terminates itself.

  • ORA-00494

The blocker process is killed by ANOTHER process; for example, the LGWR process terminates the CKPT process.

Note:

  • In some database hang scenarios, no Oracle error message can be found in the ALERT log file.

 

1.1.2 Symptoms

Two kinds of symptoms can be found on the problematic Oracle database server when this incident occurs:

  • Instance crashes
  • Database hangs

 

1)      Instance crashes

The instance crashes because a background or foreground process is terminated by the Oracle database server.

Error messages indicating this symptom can be found in the corresponding ALERT log file. If any of the key Oracle background processes is terminated, the Oracle instance crashes and future client/server connections to the Oracle database are refused.
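If the instance is still partially up, or when reviewing the server after a restart, the diagnostic file locations referenced in these messages can be confirmed from the database itself. The following is a minimal sketch, assuming an 11g ADR layout and a SYSDBA connection; the view and name values are standard, but the exact paths will differ per system.

-- Locate the ALERT log, trace and incident directories (11g ADR layout)
SELECT name, value
  FROM v$diag_info
 WHERE name IN ('Diag Trace', 'Diag Alert', 'Diag Incident', 'Default Trace File');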

Here is an ORA-494 sample from a cited ALERT log file:

Sat Sep 25 08:06:57 2010

LGWR started with pid=13, OS id=1464

Sat Sep 25 08:06:57 2010

CKPT started with pid=14, OS id=2548

Wed Oct 06 16:52:01 2010

Archived Log entry 17453 added for thread 1 sequence 34734 ID 0xb691ccee dest 1:

Wed Oct 06 16:52:50 2010

Thread 1 advanced to log sequence 34736 (LGWR switch)

  Current log# 2 seq# 34736 mem# 0: L:\ORACLE\REDOLOG\DWHPROD\REDO02A.RDO

  Current log# 2 seq# 34736 mem# 1: D:\ORACLE\REDOLOG\DWHPROD\REDO02B.RDO

Wed Oct 06 16:53:10 2010

Archived Log entry 17454 added for thread 1 sequence 34735 ID 0xb691ccee dest 1:

Wed Oct 06 16:53:56 2010

Thread 1 advanced to log sequence 34737 (LGWR switch)

  Current log# 3 seq# 34737 mem# 0: L:\ORACLE\REDOLOG\DWHPROD\REDO03A.RDO

  Current log# 3 seq# 34737 mem# 1: D:\ORACLE\REDOLOG\DWHPROD\REDO03B.RDO

Wed Oct 06 16:54:17 2010

Archived Log entry 17455 added for thread 1 sequence 34736 ID 0xb691ccee dest 1:

Wed Oct 06 16:55:02 2010

Thread 1 advanced to log sequence 34738 (LGWR switch)

  Current log# 4 seq# 34738 mem# 0: L:\ORACLE\REDOLOG\DWHPROD\REDO04A.RDO

  Current log# 4 seq# 34738 mem# 1: D:\ORACLE\REDOLOG\DWHPROD\REDO04B.RDO

Wed Oct 06 16:55:22 2010

Archived Log entry 17456 added for thread 1 sequence 34737 ID 0xb691ccee dest 1:

Wed Oct 06 16:56:07 2010

Thread 1 advanced to log sequence 34739 (LGWR switch)

  Current log# 5 seq# 34739 mem# 0: L:\ORACLE\REDOLOG\DWHPROD\REDO05A.RDO

  Current log# 5 seq# 34739 mem# 1: D:\ORACLE\REDOLOG\DWHPROD\REDO05B.RDO

Wed Oct 06 16:56:27 2010

Archived Log entry 17457 added for thread 1 sequence 34738 ID 0xb691ccee dest 1:

Wed Oct 06 16:56:45 2010

Errors in file f:\oracle\diag\rdbms\DWHPROD\DWHPROD\trace\DWHPROD_lgwr_1464.trc (incident=427828):

ORA-00494: enqueue [CF] held for too long (more than 900 seconds) by 'inst 1, osid 2548'

Incident details in: f:\oracle\diag\rdbms\DWHPROD\DWHPROD\incident\incdir_427828\DWHPROD_lgwr_1464_i427828.trc

Killing enqueue blocker (pid=2548) on resource CF-00000000-00000000

 by killing session 555.1

by terminating the process

LGWR (ospid: 1464): terminating the instance due to error 2103 

Here is an ORA-239 sample from a cited ALERT log file:

Errors in file f:\oracle\diag\rdbms\DWHPROD\DWHPROD\trace\DWHPROD_lgwr_1004.trc  (incident=448308):

ORA-00239: timeout waiting for control file enqueue: held by 'inst 1, osid 4604' for more than 900 seconds

Incident details in: f:\oracle\diag\rdbms\DWHPROD\DWHPROD\incident\incdir_448308\DWHPROD_lgwr_1004_i448308.trc

opidrv aborting process LGWR ospid (2628_1004) due to error ORA-603

Wed Nov 03 00:18:52 2010

Errors in file f:\oracle\diag\rdbms\DWHPROD\DWHPROD\trace\DWHPROD_pmon_844.trc:

ORA-00470: LGWR process terminated with error

PMON (ospid: 844): terminating the instance due to error 470

Wed Nov 03 00:18:52 2010

Errors in file f:\oracle\diag\rdbms\DWHPROD\DWHPROD\trace\DWHPROD_j000_9792.trc:

ORA-00470: LGWR process terminated with error

Wed Nov 03 00:18:53 2010

Errors in file f:\oracle\diag\rdbms\DWHPROD\DWHPROD\trace\DWHPROD_q001_5764.trc:

ORA-00470: LGWR process terminated with error

Instance terminated by PMON, pid = 844

2)      Database hangs

The database hangs because system resources are busy, and the database does not respond for a long period of time.

No explicit error message indicating this symptom can be found in the corresponding ALERT log file, but the ALERT log shows that the database has not performed redo log switch operations as expected for a long period of time. The database system remains in a degraded state, and future client/server connections to the Oracle database may be accepted but will not be established successfully.
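One quick way to see this from the database itself, once it responds again, is to count redo log switches per hour around the suspected hang window. This is a minimal sketch using the standard v$log_history view; a sudden drop to zero switches during otherwise busy hours is consistent with the hang described above.

-- Redo log switches per hour; gaps or zero counts indicate the hang window
SELECT TO_CHAR(first_time, 'YYYY-MM-DD HH24') AS hour,
       COUNT(*)                               AS log_switches
  FROM v$log_history
 GROUP BY TO_CHAR(first_time, 'YYYY-MM-DD HH24')
 ORDER BY hour;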

Here is a sample from a cited ALERT log file:

Tue Oct 26 03:45:03 2010

Thread 1 cannot allocate new log, sequence 35778

Private strand flush not complete

  Current log# 5 seq# 35777 mem# 0: L:\ORACLE\REDOLOG\DWHPROD\REDO05A.RDO

  Current log# 5 seq# 35777 mem# 1: D:\ORACLE\REDOLOG\DWHPROD\REDO05B.RDO

Thread 1 advanced to log sequence 35778 (LGWR switch)

  Current log# 6 seq# 35778 mem# 0: L:\ORACLE\REDOLOG\DWHPROD\REDO06A.RDO

  Current log# 6 seq# 35778 mem# 1: D:\ORACLE\REDOLOG\DWHPROD\REDO06B.RDO

Tue Oct 26 03:45:06 2010

Archived Log entry 18496 added for thread 1 sequence 35777 ID 0xb691ccee dest 1:

Tue Oct 26 10:02:38 2010

Starting ORACLE instance (normal)

1.1.3 OEM Monitor

OEM Grid Control infrastructure monitors Oracle database and instance availability and other system health statistics.

Due to the different symptoms mentioned above, OEM may not be able to detect or report the underlying incident in time.

1)      Instance crashes

OEM is able to detect this incident by the rejected connection return code from the crashed instance. An OEM alarm is generated once the OEM Agent fails to connect to the instance after the crash. An alarm-related Support Request is then assigned to the database support queue according to the level of impact.

Here is a sample of the OEM alarm:

OEM: DWHPROD: Failed to connect to database instance: ORA-01034: ORACLE not available

2)      Database hangs

OEM may not be able to detect this incident in time because the OEM Agent connection to the Oracle database may also hang for a long period of time. One or more OEM alarms are generated only after the client/server connection times out. One or more alarm-related Support Requests are then assigned to the database support queue according to the level of impact.

Here is a sample of the OEM alarm:

OEM: DWHPROD: Failed to connect to database instance: ORA-01034: ORACLE not available

Alarms for the listener or other targets monitored by OEM may also be generated, depending on the level of degradation:

OEM: pmichlaudwh28.PMINTL.NET: The listener is down: TNS-12571: TNS: packet write failure

1.1.4 Root Cause

The incident caused by the ORA-00494 error is identified as an Oracle database server bug, which is published on the Oracle Support website:

 Bug 7692631 – DATABASE CRASHES WITH ORA-494 AFTER UPGRADE TO 10.2.0.4

The root cause of this bug is the Oracle database Kill Blocker Interface feature, which was introduced in Oracle database version 10.2.0.4. Although this is a proactive mechanism to prevent the instance from entering a cluster-wide hang state, Oracle Support eventually related this incident to the unpublished Bug 7914003, 'KILL BLOCKER AFTER ORA-494 LEADS TO FATAL BG PROCESS BEING KILLED'.

The following Root Cause Analysis explains the full picture of the incident for each of the above two symptoms.

1.1.4.1 Instance crashes

The first entry point for diagnosing this incident is clearly located in the ALERT log file, with further detail in the trace file and dump information.

Here is the related trace file for background process LGWR:

*** 2010-10-06 16:31:47.336

Warning: log write time 550ms, size 1KB

*** 2010-10-06 16:32:13.398

Warning: log write time 1260ms, size 4KB

*** 2010-10-06 16:32:38.883

Warning: log write time 1550ms, size 1KB

*** 2010-10-06 16:51:25.329

Warning: log write time 500ms, size 848KB

*** 2010-10-06 16:56:45.686

Unable to get enqueue on resource CF-00000000-00000000 (ges mode req=4 held=6)

Possible local blocker ospid=2548 sid=555 sser=1 time_held=1286377005 secs (ges mode req=6 held=4)

DUMP LOCAL BLOCKER: initiate state dump for KILL BLOCKER

  possible owner[14.2548] on resource CF-00000000-00000000

Dumping process info of pid[14.2548] requested by pid[13.1464]

Incident 427828 created, dump file: f:\oracle\diag\rdbms\DWHPROD\DWHPROD\incident\incdir_427828\DWHPROD_lgwr_1464_i427828.trc

ORA-00494: enqueue [CF] held for too long (more than 900 seconds) by 'inst 1, osid 2548'

Killing enqueue blocker (pid=2548) on resource CF-00000000-00000000

 by killing session 555.1

Kill session 555.1 failed with status 29

Killing enqueue blocker (pid=2548) on resource CF-00000000-00000000

 by terminating the process

Killing fatal process ospid 1752161928

Issue instance termination

Combining the ALERT log file, the trace file and the dumped object states, the Five Ws (WHO, WHAT, WHEN, WHY and HOW) related to this incident are addressed step by step as follows.

  • WHO

The key information is the Resource Owner and the Resource Requestor, whose behavior eventually leads the system to a crash.

 

1)       Resource Owner

This shows who is holding the contended resource before the resource request is sent by the other process. In this case, the Oracle CKPT process is the resource owner; its Oracle PID is 000E in hexadecimal (14 in decimal) and is identified in the additional trace file.

Here is the relevant information from the additional trace file:

SO: 0x0000000474C98690, type: 7, owner: 0x0000000474AC49A0, flag: INIT/-/-/0x00 if: 0x1c: 0x1

               proc=0x00000004686FD9E0, name=enqueue, file=ksq1.h LINE:234, pg=0

              (enqueue) CF-00000000-00000000 DID: 0000-000E-00000003

              lv: 35 b2 a5 00 00 00 00 00 00 00 00 00 00 00 00 00  res_flag: 0x2

              mode: SS, lock_flag: 0x10, lock: 0x0000000474C986E8, res: 0x0000000474E08448

              own: 0x0000000474AAD770, sess: 0x0000000474AAD770, proc: 0x00000004686FD9E0, prv: 0x0000000474E08458

2)       Resource Requestor

This shows who is requesting the contended resource while being blocked by the other process. In this case, the Oracle LGWR process is the resource requestor, as identified in the ALERT log file.

Note: Depending on system activity, the resource owner and requestor could be other Oracle background or foreground processes, e.g. an ARCn process, an RMAN backup job, an application process, etc.
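On a live system showing the same contention, the owner and requestor of the CF enqueue can usually be identified before the kill happens. The following is a hedged sketch using the standard v$lock, v$session and v$process views; the osid reported in the ALERT log (2548 in this case) corresponds to the SPID / thread id shown by v$process on Windows.

-- Who holds and who requests the control file (CF) enqueue right now
SELECT s.sid, s.serial#, s.program, p.spid AS osid,
       l.lmode AS mode_held, l.request AS mode_requested
  FROM v$lock l
  JOIN v$session s ON s.sid  = l.sid
  JOIN v$process p ON p.addr = s.paddr
 WHERE l.type = 'CF';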

  • WHAT

The key information is the contended resource, which is the 'bottleneck' in the resource allocation flow.

1)       Contention Resource

This shows what database resource is being acquired and blocked by the other process, which eventually causes the contention. In this case, CF-00000000-00000000 is the contended resource; it is the enqueue protecting the Oracle control file and is identified in the additional trace file.
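Whether the CF enqueue really is the bottleneck can also be cross-checked against the cumulative enqueue statistics. A small sketch, assuming a 10g/11g system where v$enqueue_stat is available:

-- Cumulative waits on the control file (CF) enqueue since instance startup
SELECT eq_type, total_req#, total_wait#, succ_req#, failed_req#, cum_wait_time
  FROM v$enqueue_stat
 WHERE eq_type = 'CF';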

  • WHEN

The key information is the date and time the incident happened, which can be used to measure the SLA afterwards. This information can be found either on the Oracle database side (the ALERT log file and trace files) or in the OEM alarm. In some cases, it can also be found in backup reports or application logs. In this case, the instance crashed at the time shown in the ALERT log file:

Wed Oct 06 16:56:45 2010

Errors in file f:\oracle\diag\rdbms\DWHPROD\DWHPROD\trace\DWHPROD_lgwr_1464.trc

  • WHY

Different symptoms can be caused by the same root cause in different scenarios. Finding out the reason is significant for relating different Support Requests to the same instability and preventing the same incident from re-occurring in the future.

In this incident, during the problematic period the Oracle CKPT process was updating all the control files and all 34 data files (excluding temporary files), while the Oracle LGWR process was initiating the log switch operation. In other words, the database was running the typical checkpoint processing that updates the control files and data file headers for consistency and durability purposes:

  • CKPT process updates database files header except temporary files
  • CKPT process updates control files
  • LGWR process performs log switch operation
  • LGWR process waits for control file enqueue

 

In this case, LGWR shows slow performance, which may be caused by the I/O subsystem in the storage layer or by resource contention from other background or foreground processes; this is also recorded in the LGWR trace file above, where every single redo log write operation took longer than 500 ms.
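The slow LGWR and control file I/O can be confirmed system-wide, not just from the warnings in the trace file. This sketch reads the standard v$system_event view; average latencies well above a few milliseconds for these events would match the 500 ms+ writes recorded above.

-- Average latency of LGWR and control file I/O since instance startup
SELECT event,
       total_waits,
       ROUND(time_waited_micro / NULLIF(total_waits, 0) / 1000, 1) AS avg_wait_ms
  FROM v$system_event
 WHERE event IN ('log file parallel write',
                 'control file parallel write',
                 'control file sequential read');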

The online redo log switch took place while the checkpoint operation was holding the control file enqueue; once the hold time accumulated to more than 900 seconds, the Oracle Kill Blocker mechanism was triggered and eventually forced the database server to kill the blocking background process, CKPT. Because this incident happened on the Microsoft Windows platform, where Oracle runs in a multi-threaded architecture rather than the multi-process architecture used on Unix-like platforms, the kill-session behavior escalated to killing the ORACLE.EXE process, and the Oracle instance was terminated unexpectedly, crashing in an inconsistent state.

The following figure shows the general activities of the two processes that caused the incident:

 

Here is the related trace file for dump information of different object states:

Dump of memory from 0x000000046B1A07B8 to 0x000000046B1A0810

        46B1A07B0                   00000101 00000000          [……..]

        46B1A07C0 00000109 00000000 686FD9E0 00000004  [……….oh….]

        46B1A07D0 686FD9E0 00000004 6B1A06A8 00000004  [..oh…….k….]

        46B1A07E0 686FDA40 00000004 00000000 00000000  [@.oh…………]

        46B1A07F0 00000000 00000000 00000000 00000000  […………….]

          Repeat 1 times

            (FOB) flags=2050 fib=000000047747DE88 incno=0 pending i/o cnt=0

             fname=H:\DWHPROD_DATA2\ORADATA\INDEX_SMB2010_2.DBF

             fno=34 lblksz=16384 fsiz=50808 

        Dump of memory from 0x000000046B1A0688 to 0x000000046B1A06E0

        46B1A0680                   00000101 00000000          [……..]

        46B1A0690 00000109 00000000 686FD9E0 00000004  [……….oh….]

        46B1A06A0 686FD9E0 00000004 6B1A0578 00000004  [..oh….x..k….]

        46B1A06B0 6B1A07D8 00000004 00000000 00000000  […k…………]

        46B1A06C0 00000000 00000000 00000000 00000000  […………….]

          Repeat 1 times

            (FOB) flags=2050 fib=000000047747DA38 incno=0 pending i/o cnt=0

             fname=H:\DWHPROD_DATA1\ORADATA\INDEX_SMB2010_1.DBF

             fno=33 lblksz=16384 fsiz=50808

        Dump of memory from 0x000000046B1A0558 to 0x000000046B1A05B0

        46B1A0550                   00000101 00000000          [……..]

        46B1A0560 00000109 00000000 686FD9E0 00000004  [……….oh….]

        46B1A0570 686FD9E0 00000004 6B1A0448 00000004  [..oh….H..k….]

        46B1A0580 6B1A06A8 00000004 00000000 00000000  […k…………]

        46B1A0590 00000000 00000000 00000000 00000000  […………….]

          Repeat 1 times

            (FOB) flags=2050 fib=000000047747D600 incno=0 pending i/o cnt=0

             fname=H:\DWHPROD_DATA2\ORADATA\DATA_SMB2010_2.DBF

             fno=32 lblksz=16384 fsiz=126144

  • HOW

The key information is how the instance crashed. In this incident, this is clearly recorded by the Oracle database server in the ALERT log file: the 900-second enqueue resource timeout triggered the KILL BLOCKER interface.

1.1.4.2 Database hangs

The other symptom of this kind of incident is that the database hangs for a long period of time. During the problematic period, connections to the Oracle database server are still accepted as normal, and no further log or trace information is recorded on the database server side. The whole Oracle database server is left in a degraded state even though all the key processes appear to be running normally.

Note:

  • OEM may not be able to detect this potential problem in time.
  • Two or more alarms may be generated for OEM-monitored targets other than the database.

 

  • WHO

In this scenario, this information is not available on the Oracle database side; no trace file directly shows it.

  • WHAT

In this scenario, this information is not available on the Oracle database side; no trace file directly shows it.

  • WHEN

In this scenario, this information is not available on the Oracle database side; no trace file directly shows it. The down time can, however, be estimated from the normal log switch interval and the OEM alarms.
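Since no error is written during the hang, the down time is best estimated from the gap between consecutive log switches. A minimal sketch, again against the standard v$log_history view; any gap far larger than the normal switch interval brackets the hang window.

-- Gap (in minutes) between consecutive redo log switches
SELECT first_time,
       ROUND((first_time - LAG(first_time) OVER (ORDER BY first_time)) * 24 * 60)
         AS minutes_since_previous_switch
  FROM v$log_history
 ORDER BY first_time;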

  • WHY

There are several approaches to finding out the reason for the hang and the related database activities. Statspack, AWR, ASH or particular trace events can be used to build the real picture behind the scenes.
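For reference, both the AWR report and ASH can be pulled with standard tooling. Here is a minimal sketch run from SQL*Plus as a privileged user; the time window below is illustrative, taken from the hang sample shown earlier, and both AWR and ASH require the Diagnostics Pack license.

-- Generate an AWR report interactively (choose snapshots around the hang)
@?/rdbms/admin/awrrpt.sql

-- Or summarize ASH samples for the suspected window directly
SELECT event, COUNT(*) AS samples
  FROM v$active_session_history
 WHERE sample_time BETWEEN TIMESTAMP '2010-10-26 03:45:00'
                       AND TIMESTAMP '2010-10-26 10:00:00'
 GROUP BY event
 ORDER BY samples DESC;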

In this incident, an AWR report was generated to find the top wait events. Log file sync and control file sequential read are the two main contributors to the incident, since together they consumed more than 96% of database time.

Here is a sample of the top system-wide wait events:

 

Further confirming the above cause, the 99.17% I/O WAIT also shows that the system was under high disk I/O pressure. This could be due to physical file system contention or slow hardware in the storage layer.

Here is a sample of the I/O WAIT statistics:

 

Furthermore, the ALERT log from before the incident was noticed shows that the redo log switch frequency was extremely high, with switches every minute. This causes one process to hold the control file enqueue (in one mode or another) while another process requests the same enqueue resource. In this scenario, the control file is clearly the contended resource, because both of the above events synchronize at least the Redo Byte Address and SCN information.

This can be caused by a heavy client transaction rate, slow disk I/O performance, slow network I/O performance or a storage problem.

  • HOW

The instance is not terminated but remains in a degraded state. All the foreground processes and the PGA hold inconsistent information. The instance must be terminated manually and started up again in order to return to normal.

1.2 Solution

There are two categories of solution for this incident, according to how critical the circumstances are:

  • Solve Incident
  • Prevent Incident

 

1.2.1 Solve Incident

When alarms are generated or support requests are assigned, it is critical that the database system be brought back to normal for service access as soon as possible.

The following steps can be followed to troubleshoot this kind of incident and bring the database service back online as quickly as possible.

 

Step  Description
1     Check the ALERT log file and trace files
2     Shut down the Oracle instance cleanly and start it up again
3     Check the database status in OEM
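A minimal SQL*Plus sketch of step 2, assuming the instance must be bounced manually as described above; if the clean shutdown itself hangs on the same enqueue, a SHUTDOWN ABORT followed by a normal startup is the usual fallback.

-- Run on the database server: sqlplus / as sysdba
SHUTDOWN IMMEDIATE;
-- If the clean shutdown hangs, fall back to: SHUTDOWN ABORT;
STARTUP;

-- Confirm the instance is open again
SELECT instance_name, status FROM v$instance;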

 

1.2.2 Prevent Incident

According to official Oracle Support notes, the ORA-00494 or ORA-00239 error is triggered by the newly introduced Oracle Kill Blocker interface and can happen on any OS platform running Oracle database server Enterprise Edition from 10.2.0.4 up to 11g.

1.2.2.1 Risk Matrix

The following Risk Matrix is categorized according to the impact level of the Support Requests and Alarms. Given the frequency of this incident, the Risk Matrix helps to understand the impact on the whole system, so that proactive action, for example Change coordination, can be taken to prevent the incident from re-occurring in the future.

  

1.2.2.2 Change Coordination

To determine whether any other potential issue accompanies the incident, the following points should be confirmed on the affected server before raising a Change to implement the solution (a query sketch for the redo log checks follows the list):

  • Small redo log member size
  • Few redo log groups
  • Frequent redo log switches
  • Disk I/O statistics via Operations Manager
  • Health status of the shared storage device
  • Health status of the TSM backup device
  • Whether the server is running under a virtual machine
  • More detailed information can be obtained with assistance from the TSER and Automation teams
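The redo-log-related points in the checklist above can be verified quickly from the data dictionary. A small sketch using the standard v$log and v$logfile views; switch frequency can be checked with the v$log_history query shown earlier.

-- Redo log group count, member count and size
SELECT group#, members, bytes / 1024 / 1024 AS size_mb, status
  FROM v$log
 ORDER BY group#;

-- Individual redo log member files
SELECT group#, member FROM v$logfile ORDER BY group#;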

 

More related information can be obtained with Oracle utilities on the affected server. The following are two of the most useful utilities for narrowing down the root cause:

  • ADRCI

Instructions for using ADRCI can be obtained from the Oracle Support website.

Here is a sample of invoking ADRCI to further investigate the incident.

C:\Documents and Settings\user>adrci

ADRCI: Release 11.1.0.7.0 – Production on Sat Oct 30 13:44:54 2010

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved

ADR base = “f:\oracle”

adrci> show trace -i 427828

Output the results to file: c:\docume~1\user\locals~1\temp\1\utsout_2564_6916_4.ado

  • RDA

Instructions for using RDA can be obtained from the Oracle Support website.

Here is a sample of invoking RDA to further investigate the incident.

C:\Documents and Settings\user>rda.cmd

-----------------------------------------------------------------------

RDA Data Collection Started 01-Nov-2010 07:17:40

-----------------------------------------------------------------------

Processing Initialization module …

1.2.2.3 Oracle Database Interface

The following Oracle database hidden parameters can be reconfigured to prevent further incidents caused by the same mechanism, the Oracle Kill Blocker Interface (a sketch of the corresponding ALTER SYSTEM commands follows the note below):

1)      _kill_controlfile_enqueue_blocker = { TRUE | FALSE }

  • TRUE. Default value. Enables this mechanism; the blocking process on the CF enqueue is killed.
  • FALSE. Disables this mechanism; no blocking process on the CF enqueue is killed.

2)      _kill_enqueue_blocker = { 0 | 1 | 2 | 3 }

  • 0. Disables this mechanism; no foreground or background enqueue blocker is killed.
  • 1. Enables this mechanism; only foreground enqueue blockers are killed, background processes are not affected.
  • 2. Enables this mechanism; only background enqueue blockers are killed.
  • 3. Default value. Enables this mechanism; blocking processes (foreground or background) on the enqueue are killed.

3)      _controlfile_enqueue_timeout = { INTEGER }

  • 900. Default value (seconds).
  • 1800. Suggested value to avoid hitting the enqueue timeout.

 

Note:

  • The SPFILE should be backed up for rollback purposes before the change is implemented.
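Here is a sketch of how these parameters could be changed, assuming an SPFILE is in use. Hidden (underscore) parameters must be quoted, and in practice they should only be changed on the advice of Oracle Support; the values and the backup path below are only examples following the descriptions above.

-- Back up the SPFILE first for rollback purposes (path is illustrative)
CREATE PFILE = 'f:\oracle\backup\init_DWHPROD_backup.ora' FROM SPFILE;

-- Relax the control file enqueue timeout and restrict the kill-blocker behavior
ALTER SYSTEM SET "_controlfile_enqueue_timeout" = 1800 SCOPE = SPFILE;
ALTER SYSTEM SET "_kill_controlfile_enqueue_blocker" = FALSE SCOPE = SPFILE;
ALTER SYSTEM SET "_kill_enqueue_blocker" = 1 SCOPE = SPFILE;

-- An instance restart is required for the new values to take effect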

 

A slight change was made to this error message between releases; here is an example from 10g and 11g. The keyword 'Potential' was removed in 11g. The better you understand your database system, the more robust your system and service will be.

10g

===

GES: Potential blocker (pid=4840) on resource CF-00000000-00000000;

Killing enqueue blocker (pid=4840) on resource CF-00000000-00000000

11g

===

ORA-00494: enqueue [CF] held for too long (more than 900 seconds) by 'inst 1, osid 2548'

Killing enqueue blocker (pid=2548) on resource CF-00000000-00000000

2 Reference documents

 

  • Oracle Support Note – Database Crashes With ORA-00494 [ID 753290.1]
  • Oracle Support Note – Disk IO Contention Slow Can Lead to ORA-239 and Instance Crash [ID 1068799.1]
  • Oracle Support Note – ORA-00494 During High Load After 10.2.0.4 Upgrade [ID 779552.1]


7 Responses to “Thinking in ORA-494, ORA-239 Instance Crashes and Hangs”

  1. vickyfin Says:

    I faced the same Issue today. But it was ARCH log which is having problem to keep up with the 17,000 log switches that our database is doing in a day as the size was 100MB each which has to be 2GB. I was not sure first why the ARCH is killing itself. Then when I read your doc. I understood everything. Checked the log switches and increased the size of the threads and all good. Thanks

  2. CKPT cause Database Abnormal shutdown « Shrikant's Blog Says:

    […] Nicely explain in depth, https://lifedba.wordpress.com/2011/02/04/thinking-in-ora-494-ora-239-instance-crashes-and-hangs/ […]

    • lifedba Says:

      Thanks for referencing my notes, hope it’s helpful to more and more Oracle DBAs and developers.

