Friday 15 July 2016

Why DBAs should be very interested in IoT projects


IoT is pervading our lives through many of its everyday applications, from Fitbit to smart home automation tools. But where IoT is more likely to be revolutionary is in other sectors:

·         In industry, in smart cities or in building management, smart connected IoT sensors are becoming so cheap that they will soon pervade all critical equipment. This will allow much better control of that equipment: not only reacting faster to failures, but also putting in place Predictive Failure Analysis where today we rely on more expensive Planned Preventative Maintenance.

·         In transportation and logistics, IoT may bring better control not only of the equipment but of the whole supply chain. The sensitive goods themselves will be tracked during their journey from departure to destination, with precise recording of shocks, vibrations, temperature and humidity variations, etc.

·         Telemedicine, too, is forecast to be revolutionized by IoT, producing great quantities of sensitive data.

 
All these applications will create a huge amount of data that will have to be stored in databases. This is a challenge for DBAs, who will see, alongside the usual application-driven databases, less structured databases appear, fed by thousands or tens of thousands of remote IoT devices.

Most of the time, real-time analytics will have to run on this data in order to take quick decisions (whether reactive or proactive), creating challenges in terms of space, performance, and how to historicize or purge data that is produced in huge quantities. The faster a decision must be taken, the more frequently the data must be measured, and this creates a high volume of data to be transferred, stored and analyzed. Yet while we need detailed, fresh data to take reactive decisions, most of it is of little or no use once it has been created and analyzed. To draw a daily report of a fluid temperature, a reading every 5 minutes is sufficient, while I may need one measurement per second (300 times more) to be reactive enough in my industrial process.
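Just to illustrate the idea on the database side (the table and column names here are invented for the example, not part of any real system), the DBA could keep a 5-minute aggregate for reporting and purge the per-second detail once it is no longer needed for reactive decisions:

-- Hypothetical schema: raw readings land in dbo.SensorReading (SensorId, ReadAt, Temperature).
-- Keep a 5-minute aggregate for reporting, then purge raw detail older than one day.
INSERT INTO dbo.SensorReading5Min (SensorId, IntervalStart, AvgTemperature, MaxTemperature)
SELECT SensorId,
       DATEADD(MINUTE, (DATEDIFF(MINUTE, 0, ReadAt) / 5) * 5, 0) AS IntervalStart,
       AVG(Temperature),
       MAX(Temperature)
FROM dbo.SensorReading
WHERE ReadAt < DATEADD(DAY, -1, GETDATE())
GROUP BY SensorId,
         DATEADD(MINUTE, (DATEDIFF(MINUTE, 0, ReadAt) / 5) * 5, 0);

-- Raw per-second detail is of little use after it has been aggregated
DELETE FROM dbo.SensorReading
WHERE ReadAt < DATEADD(DAY, -1, GETDATE());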

So the question arises: does it make sense to centralize huge amounts of data and run centralized analytics on an infrastructure that may be difficult to scale up fast enough to follow growth that may be exponential?

CERN, whose experiments produce a data flow of about 25 GB/s (gigabytes per second), has developed a two-stage process to handle such a huge amount of data: the first stage uses algorithms to select the events that need to be analyzed further and to discard the events of no interest.

This is similar to the process that can be put in place near the IoT sensors (at the "edge") thanks to the specialized IoT Edge Gateways proposed by Dell, which can collect IoT data, run analytics on it, take immediate decisions and push only the valuable information to a centralized location, where it will be stored for future, detailed analysis.


Dell's IoT Edge Gateways are rugged, fanless devices powered by Celeron processors. They connect to wired and wireless IoT devices and, through Statistica (or the middleware of your choice), they can aggregate and analyze the devices' output, allowing immediate decisions to be taken and only the meaningful data to be sent to your databases, which also saves precious bandwidth. They can be placed in factories or buildings where temperatures are harsh, or even mounted on trucks.



The Dell IoT Edge Gateways can also connect to devices through typical IoT protocols such as Modbus, BACnet, ZigBee, etc., and they allow for a high level of security in data transmission, which makes this kind of gateway a mandatory component of any IoT architecture.

In conclusion, the DBA should participate in IoT projects from the early stages, evaluate the quality and quantity of the data that will be produced, and ensure that decentralized analytics are put in place at the edge so that only meaningful data is uploaded into the databases he will manage.

Wednesday 14 September 2011

Problem: Clustered SQL 2008R2 setup crashed and SQL AGENT will not start

Description of the problem

A few days ago I was adding a third instance to a two-node SQL Server 2008 R2 EE failover cluster running on Windows 2008 R2 EE when, near the end of the setup process on the first node, setup.exe crashed, leaving the installation in an incomplete state.
I tried to run the repair tool, but it would not work because "the installation has never been configured".
A similar setup.exe crash had happened to me once before, but as that was the setup of the first instance, I preferred to restore the image of the server that had been taken just after the Windows cluster configuration was finalized.
This time it was different: I already had two working instances running and I did not want to start from scratch, nor cause downtime, etc.
Uninstalling a specific SQL Server instance is not very easy even when everything went the right way, and removing a node from a cluster can be even more challenging. I decided I had to try to manually repair the installation on the first node and then try to add the second node to the cluster.

Evaluating the situation

I started looking at how dramatic the situation was: what had been installed, what would run and what would not. The SQL Server and SQL Server Agent services were present in the SQL Server Configuration Manager, but they were not started.
While I could find the SQL Server resource in the Cluster Manager, the SQL Agent resource was nowhere to be seen. I tried to bring the cluster resource online: network, disks and SQL Server started gracefully. I felt some relief; the fact that the cluster resource was online gave me some hope!
I went back to the SQL Server Configuration Manager and tried to start the SQL Server Agent service: it failed again, so I went to the Application and System event logs and found the errors logged by the SQL Server Agent service while it was trying to start. It complained a lot, in fact, saying that it could not find the SQL instance it belonged to.
Some of the error messages sounded like:

SQLServer Error: 65535, SQL Network Interfaces: Error Locating Server/Instance Specified [xFFFFFFFF].
And
Logon to server 'ClusterResource\InstanceName' failed (DisableAgentXPs)
Strangely, the cluster resource and instance names were correct, and after starting SQL Server I had successfully connected to it through SSMS.
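By the way, the DisableAgentXPs part of the message is also a reminder that the Agent XPs configuration option must be enabled for the Agent to talk to its instance. Just as a suggested check (not something I had to run here):

-- Check whether the SQL Server Agent extended stored procedures are enabled
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'Agent XPs';   -- run_value should be 1 on a working instance

-- If it is 0, enable it (starting the Agent service normally does this for you)
EXEC sp_configure 'Agent XPs', 1;
RECONFIGURE;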

Making the SQL Server agent service start

I googled around and found this illuminating post, which explained where the registry keys I was sure I would have to tweak were located:
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL10_50.InstanceName\SQLServerAgent]
There I was told to fill in the ServerHost value as follows:
"ServerHost"="ClusterResourceName\\InstanceName"
Again I tried to start the service from SQL Server Configuration Manager, and this time it started gracefully. The first step was done, but now I had to add the service to the cluster as a resource.

Adding the SQL Server agent service to the cluster

How do you add a specific service as a cluster resource? Not by using basic Cluster.exe syntax like
cluster restype "SQL Server Agent" /create /DLL:sqagtres.dll
as I had found in a very useful MSDN forum post. This was wrong in my case, because I already had a generic "SQL Server Agent" resource type in the cluster (remember, I already had two SQL Server instances installed). What I needed was to add the specific SQL Server Agent resource that was part of the instance I was installing (i.e. "SQL Server Agent (InstanceName)").

These are the steps I took:

-In the Failover Cluster Manager I used Add Resource to add the SQL Server Agent resource.
-I named the resource “SQL Server Agent (InstanceName)” after the real instance name.
-Right-clicking on the SQL Agent resource I went to Properties and filled in values for the two parameters:
  1. The Virtual Server name of the SQL instance
  2. The Instance name
-I then added the SQL Server resource as a dependency of the newly created SQL Server Agent resource.
But one more magic step was necessary.

Make the system believe that the features were correctly installed and configured

Without this operation, all of the above will fail. I also realized that if I had found this tweak earlier, I might have been able to repair the installation with the repair tool from the very start!
I opened the Registry Editor and browsed to the following location
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL10_50.INSTNAME\ConfigurationState
(this is a named instance). Here you have to make sure that all the values present are set to 1. My values for all four features listed were 2, so I changed them all. This sets the configuration state of all SQL features to "installed and configured"; a value greater than 1 indicates either an installation failure or a configuration failure.
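Just as an aside (not something I needed to do here, and xp_regread is an undocumented procedure), the same values can also be read from a query window; the instance id and the value name below are only examples to adapt to your own setup:

-- Read one of the ConfigurationState values through the undocumented xp_regread
EXEC master.dbo.xp_regread
     @rootkey    = N'HKEY_LOCAL_MACHINE',
     @key        = N'SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL10_50.INSTNAME\ConfigurationState',
     @value_name = N'SQL_Engine_Core_Inst';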

Bringing online the cluster resource

It was time to try to bring the resource online, and it worked perfectly. I checked the event logs for errors but could find none. It was time to install the second node.

Adding a node to a SQL Cluster

Well, believe it or not, I really thought it would be a straightforward operation now that I had patched my setup on the first node. Of course, I was wrong.
During the simplified "add node to existing cluster" SQL setup process, setup.exe asks fewer questions: once you have confirmed that you are adding a node to the proposed instance, you are only asked for the passwords of the SQL Server and SQL Server Agent service accounts (the service account names being read-only, as they were of course already defined during the first node's setup).
So what a surprise when setup.exe asked me for the password of an EMPTY SQL Server Agent service account! I sighed, cancelled the setup and went back to the first node. What to do next?
Well, I found out that it was my fault! I had not executed the fourth step advised in the MSDN forum post: running a repair of the failed SQL Server installation!
This time the repair ran smoothly. I launched the setup of the second node again and everything went perfectly. I could then do the usual post-setup tests and configuration operations, and I am now the happy administrator of my 16th clustered SQL Server instance!
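For what it's worth, the kind of quick post-setup checks I mean are along these lines (my own illustrative queries, nothing official):

-- Confirm the virtual server name, the node currently hosting the instance,
-- and that the instance sees itself as clustered
SELECT SERVERPROPERTY('MachineName')                 AS virtual_server_name,
       SERVERPROPERTY('ComputerNamePhysicalNetBIOS') AS current_host_node,
       SERVERPROPERTY('IsClustered')                 AS is_clustered;

-- Both cluster nodes should be listed once the second node has been added
SELECT NodeName
FROM sys.dm_os_cluster_nodes;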
   
Resources:
Let SQL Server Agent service start
http://www.eggheadcafe.com/software/aspnet/32049165/i-surrender-sql-server-agent-fails-to-start-2-node-cluster.aspx

SQL Server Agent not listed as cluster resource (in MSDN SQL Server forums)
http://social.msdn.microsoft.com/Forums/en-US/sqlsetupandupgrade/thread/5049fc44-0f44-4caf-9c5a-282edf30d878

How to: Repair a Failed SQL Server 2008 Installation (in MSDN library)
http://msdn.microsoft.com/en-us/library/cc646006.aspx
 

Sunday 15 November 2009

Database logins and roles script

This script reads the role memberships and permissions of each user database on a server and directly generates a script that can be run against a different server in order to re-create them (the logins themselves are not scripted; see the note after the script).




-- SCRIPT TO CREATE THE ROLES CREATION SCRIPT
-- IN ORDER TO MOVE THE DATABASES TO ANOTHER SERVER
-- This script needs to be run on the original Server
-- and it will output the necessary SQL instructions
-- that you will run on the new server to replicate
-- the original server roles configuration

-- Originally found on http://www.sqlservercentral.com/Forums/Topic508244-359-1.aspx -- Fabrice Faleni

--Role Memberships'
DECLARE @DBName VARCHAR(64)

DECLARE c1 CURSOR FOR
SELECT name
FROM master..sysdatabases
WHERE name NOT IN ('master','model','msdb','tempdb')

OPEN c1
FETCH NEXT FROM c1 INTO @DBName
WHILE @@FETCH_STATUS <> -1
BEGIN

PRINT @DBName

-- Caveat: a USE executed through EXEC() only lasts for that inner batch, so the
-- catalog views below are still read in the database you run this script from.
-- To script every database, run the script while connected to each one in turn
-- (or wrap the three queries below in the same dynamic batch as the USE).
EXEC('USE ' + QUOTENAME(@DBName))



SELECT --rm.role_principal_id,
       'EXEC sp_addrolemember @rolename =' + Space(1)
       + Quotename(User_name(rm.role_principal_id),'''') + ', @membername =' + Space(1)
       + Quotename(User_name(rm.member_principal_id),'''') AS '--Role Memberships'
FROM sys.database_role_members AS rm
ORDER BY rm.role_principal_id



--Object Level Permissions'
SELECT CASE
           WHEN perm.state != 'W' THEN perm.state_desc
           ELSE 'GRANT'
       END
       + Space(1) + perm.permission_name + Space(1)
       + 'ON ' + Quotename(Schema_name(obj.schema_id)) + '.' + Quotename(obj.name) COLLATE latin1_general_ci_as_ks_ws
       + CASE
             WHEN cl.column_id IS NULL THEN Space(0)
             ELSE '(' + Quotename(cl.name) + ')'
         END
       + Space(1) + 'TO' + Space(1) + Quotename(usr.name)
       + CASE
             WHEN perm.state <> 'W' THEN Space(0)
             ELSE Space(1) + 'WITH GRANT OPTION'
         END AS '--Object Level Permissions'
FROM sys.database_permissions AS perm
INNER JOIN sys.objects AS obj
        ON perm.major_id = obj.[object_id]
INNER JOIN sys.database_principals AS usr
        ON perm.grantee_principal_id = usr.principal_id
LEFT JOIN sys.columns AS cl
       ON cl.column_id = perm.minor_id
      AND cl.[object_id] = perm.major_id
ORDER BY usr.name



--Database Level Permissions'
SELECT CASE
           WHEN perm.state <> 'W' THEN perm.state_desc
           ELSE 'GRANT'
       END
       + Space(1) + perm.permission_name + Space(1)
       + Space(1) + 'TO' + Space(1) + Quotename(usr.name) COLLATE database_default
       + CASE
             WHEN perm.state <> 'W' THEN Space(0)
             ELSE Space(1) + 'WITH GRANT OPTION'
         END AS '--Database Level Permissions'
FROM sys.database_permissions AS perm
INNER JOIN sys.database_principals AS usr
        ON perm.grantee_principal_id = usr.principal_id
WHERE --usr.name = @OldUser
--AND
perm.major_id = 0
ORDER BY perm.permission_name ASC,
         perm.state_desc ASC

FETCH NEXT FROM c1
INTO @dbname
END
CLOSE c1
DEALLOCATE c1
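
Note that the script above covers database role memberships and permissions, but not the server logins themselves. A rough sketch along these lines (my own addition; it does not carry passwords or SIDs over, for which Microsoft's sp_help_revlogin approach is the usual route) can generate the CREATE LOGIN statements:

-- Generate CREATE LOGIN statements for SQL and Windows logins (no passwords/SIDs)
SELECT CASE sp.type
           WHEN 'S' THEN 'CREATE LOGIN ' + QUOTENAME(sp.name)
                         + ' WITH PASSWORD = ''<set a password here>'', CHECK_POLICY = ON;'
           ELSE 'CREATE LOGIN ' + QUOTENAME(sp.name) + ' FROM WINDOWS;'
       END AS '--Logins'
FROM sys.server_principals AS sp
WHERE sp.type IN ('S', 'U', 'G')        -- SQL logins, Windows users, Windows groups
  AND sp.name NOT LIKE '##%'            -- skip internal ##...## logins
  AND sp.name <> 'sa'
ORDER BY sp.name;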

Wednesday 29 April 2009

I'm not dead!

I really lack the time to write! I got certified on 70-282 and I am now preparing for 70-631 (SharePoint Configuration).

Friday 24 October 2008

Copying and moving files or folders within or between NTFS volumes

Copying and moving files or folders within or between NTFS volumes may affect the original permissions set on these objects. The following table summarizes how:

Operation                               Resulting permissions
Copy within the same NTFS volume        Inherits the permissions of the destination folder
Move within the same NTFS volume        Keeps its original permissions
Copy to a different NTFS volume         Inherits the permissions of the destination folder
Move to a different NTFS volume         Inherits the permissions of the destination folder

Remember that the owner will also change, becoming the user who copied the files.