Category Archives: Troubleshooting

Sometimes, you have to fix it yourself

The Problem

SQL Server is a huge product with lots of moving parts. Bugs happen. Microsoft has a place to voice your issues or problems: Connect. They allow you to vote on an issue, and then they decide when or if it will get fixed. I’ve used Connect when I hit a bug, and I have voted on items that were important to me. Recently I hit a bug in sp_createstats. I use this system stored procedure to generate statistics in an automated process I’ve got that manages statistics. I added a new vendor database to the system and on the first run hit “Column ‘DAYSOPEN’ in table ‘dbo.TBL_OPPORTUNITY’ cannot be used in an index or statistics or as a partition key because it is non-deterministic.” Well, we all know you can’t create stats on a non-deterministic computed column! I quickly went to the Connect site and someone else had already entered it. The downside was it had so few votes it was only slated to go into the next cumulative update/service pack, and when I hit this issue they hadn’t yet announced Service Pack 4. I already had this procedure coded into my routines and really didn’t want to rewrite them to get past this one problem.
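For the record, the failure is trivially easy to reproduce; just run the stock procedure against a database containing a non-deterministic computed column (the column and table names in the comment are from my vendor database):

EXEC sp_createstats;
-- Fails with: Column 'DAYSOPEN' in table 'dbo.TBL_OPPORTUNITY' cannot be used
-- in an index or statistics or as a partition key because it is non-deterministic.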

The Solution

!!WARNING!!

What I am about to describe could break at a later date or randomly kill baby kittens.

Since it is a system stored procedure I am loath to make any changes to it directly. There are ways to modify some system stored procedures, but they involve the installation CD and creativity. With that door closed there was only one avenue open to me: create my own system stored procedure with the fix in it. There is a problem with this solution as well; if it gets dropped due to a service pack or an upgrade, anything calling it will break. The first thing I did was to see if the procedure text was available by executing sp_helptext on sp_createstats. Luckily it was! Now all I had to do was figure out where it was broken. The procedure is pretty simple and uses some cursors to loop through all the objects and create column statistics where they don’t exist.
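Pulling the text is a one-liner:

EXEC sp_helptext 'sp_createstats';

The relevant chunk of what it returned is below.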

declare ms_crs_cnames cursor local for select c.name from sys.columns c
     where c.object_id = @table_id
     and (type_name(c.system_type_id) not in ('xml'))
     and c.name not in (select col_name from #colpostab where col_pos = 1)
     and ((c.name in (select col_name from #colpostab)) or (@indexonly <> 'INDEXONLY'))

It was pretty easy to spot. They weren’t checking to see if the column was computed, so I added a line to the where clause.

and c.is_computed = 0
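In context, the patched cursor declaration in my copy looks like this (the same snippet as above, with the one added line):

declare ms_crs_cnames cursor local for select c.name from sys.columns c
     where c.object_id = @table_id
     and (type_name(c.system_type_id) not in ('xml'))
     and c.name not in (select col_name from #colpostab where col_pos = 1)
     and ((c.name in (select col_name from #colpostab)) or (@indexonly <> 'INDEXONLY'))
     and c.is_computed = 0 -- the fix: skip computed columns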

That’s it. One little check to see if it is a computed column. Now that I had fixed it I created a new procedure named sp_createstats_fixed in the master database. Just creating it in master doesn’t make it act like the original procedure or make it a system stored procedure. For that I had to execute EXECUTE sp_MS_marksystemobject ‘sp_createstats_fixed’. This is an undocumented stored procedure and could change or go away at any time. The only way to unmark it in SQL Server 2005 is to drop the procedure and recreate it. Now it acts just like the old procedure. Next I had to replace all references to the old proc with the new one. I made an entry into our bug tracking system about the change so we would have a record of what I did and why.
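Condensed, the whole dance looks like this. Remember, sp_MS_marksystemobject is undocumented, so treat this as a sketch, not gospel:

USE master;
GO
-- dbo.sp_createstats_fixed is my copy of sp_createstats with the
-- is_computed check added to the cursor's where clause.
EXEC sp_MS_marksystemobject 'sp_createstats_fixed';
GO
-- Sanity check: returns 1 once the proc is flagged as a system object.
SELECT OBJECTPROPERTY(OBJECT_ID('dbo.sp_createstats_fixed'), 'IsMSShipped');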

Conclusions

This wasn’t the most elegant solution. It could break later. The upside is it only took me about 30 minutes to fix and deploy versus the hours of re-coding and then testing that I would have had to do before. Do I think you should go around creating your own system stored procedures? Not at all. I don’t recommend you put anything in the master database period. If the problem had been more complex I would have redone the original routines to exclude the broken procedure. This time it just happened to be a very quick fix to a non-critical part of our system.

When Technical Support Fails You – UPDATE and Answers!

As promised, an update on what has happened so far. First, a correction needs to be made: the P800 is a PCIe 1.0 card, so the bandwidth is cut in half, from 4GB/sec to 2GB/sec.

My CDW rep did get me in contact with an HP technical rep who actually knew something about the hardware in question and its capabilities. It was one of those good news, bad news situations. We will start with the bad news: the performance numbers aren’t off. My worst fears were confirmed.

The Hard Disks

The HP Guy (changing the names to protect the innocent) told me their rule of thumb for the performance of the 2.5” 73GB 15K drives is 10MB/sec. I know what you are thinking: NO WAY! But I’m not surprised at all. What I was told is the drives ship with the on-board write cache disabled. They do this for data integrity reasons; since the cache on the drive isn’t battery backed, if there is any kind of failure the potential for data loss is there. There are three measurements of hard disk throughput: disk to cache, cache to system, and disk to system. Disk to cache is how fast data can be transferred between the disk and the internal data cache, usually sequentially; on our 15k drive this should be on average 80MB/sec. Cache to system, also referred to as burst speed, is almost always as fast as our connection type; since we are using SAS that will be close to 250MB/sec. Disk to system uses no caching at all. Without the cache, several IO reordering schemes aren’t used and there is no buffer between you and the system, so you are effectively limited by the areal density and the rotational speed of the disk. This gets us down to 10 to 15 megabytes a second. Write caching has a huge impact on performance. I hear you saying the controller has a battery-backed cache on it, and you would be right.

The Disk Controller

The P800 controller was the top of the line that HP had for quite a while. It is showing its age now, though. The most cache you can get at the moment is 512MB. It is battery backed, so if there is a sudden loss of power the data in cache will stay there for as long as the battery holds out; when the system comes back on, the controller will attempt a flush to disk. The problem with this scheme is twofold. First, the cache is effectively shared across all your drives; since I have 50 drives total attached to the system, that works out to around 10 megabytes per drive, while comparable drives normally ship with 16 to 32 megabytes of cache on them. Second, the controller can’t offload the IO sorting algorithms to the disk drive, effectively limiting its throughput. It does support native command queuing and elevator sorting, but applied at the controller level they just aren’t as fast as at the disk level. If I had configured this array as a RAID 6 stripe, the loss of performance from that would have masked the other bottlenecks in the controller. Since I’ve got this in a RAID 10, the bottleneck is hit much sooner, with fewer drives. On the P800 this limit appears to be between 16 and 32 disks. I won’t know until I do some additional testing.

It’s All My Fault

If you have been following my blog or coming to the CACTUSS meetings, you know I tell you to test before you go into production. With the lack of documentation, I went with a set of assumptions that weren’t valid in this situation. At that point I should have stopped and done the testing myself. In a perfect world I would have set up the system in a test lab, run a series of controlled IO workloads, and come up with the optimal configuration. I didn’t do as much testing as normal and now I’m paying the price for that. I will have to bring a system out of production as I run benchmarks to find the performance bottlenecks.

The Good News

I have two P800’s in the system and will try moving one of the MSA70’s to the other controller. This will also allow me to test overall system performance across multiple PCIe buses. I have another system that is an exact duplicate of this one and originally had the storage configured in this way, but it ran into some odd performance issues as well.

HP has a faster external-only controller out right now, the P411. This controller supports the new SAS II 6G protocols, has faster cache memory, and is PCIe 2.0 compliant. I am told it has a faster IO processor as well. We will be testing these newer controllers out soon. There is also a replacement for the P800 coming out next year. Since we are only using external chassis with this card, the P411 may be a better fit.

We are also exploring a Fusion-io option for our tempdb space. We have an odd workload, and tempdb accounts for half of our write operations on disk. By speeding up this aspect of the system and moving tempdb completely away from the data, we should see a marked improvement overall.
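In case you’re wondering how we sized that claim, here’s a rough sketch using the SQL Server 2005 file-stats DMV. The numbers are cumulative since the last restart, so treat it as a ballpark, not a precise figure:

-- Percentage of all write bytes that land in tempdb (cumulative since restart).
SELECT SUM(CASE WHEN database_id = DB_ID('tempdb')
                THEN num_of_bytes_written ELSE 0 END) * 100.0
       / SUM(num_of_bytes_written) AS tempdb_write_pct
FROM sys.dm_io_virtual_file_stats(NULL, NULL);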

Lessons Learned or Relearned

Faced with the lack of documentation, don’t make assumptions based on past experiences. Test your setup thoroughly. If you aren’t getting the information you need, try different avenues early. Don’t assume your hardware vendor has all the information. In my case, HP doesn’t tell you that the disks come with the write cache disabled. They also don’t give you the full performance specifications for their disk controllers. Not even my HP Guy had that information. We talked about how there was much more detailed information on the EVA SAN than there was on the P800.

Now What?

Again, I can’t tell you how awesome CDW was in this case. My rep, Dustin Wood, went above and beyond to get me as much help as he could, and in the end was a great help. It saddens me I couldn’t get this level of support directly from HP technical support. You can rest assured I will be giving HP feedback to that effect. Not giving the customer, or even their own people, all the information sets everyone up for failure.

I’m not done yet. There is a lot of work ahead of me, but at least I have some answers. You can bet I’ll be over at booth #414 next week at PASS asking HP some hard questions!

When Technical Support Fails You

I have had the pleasure of being a vendor, and technical support for both hardware and software products. I know it isn’t easy. I know it isn’t always possible to fix everything. But the level of support I’ve received from HP on my current issue is just unacceptable. This is made more frustrating by the lack of documentation. The technical documents show capacity: how many drives in an array, maximum volume size, but nothing on throughput. Every benchmark they have seems to be relative to another product, with no hard numbers. For example, the P800 is 30% faster than the previous generation.

I’m not working with a complicated system. It’s a DL380 G5 with a P800 and two MSA70’s fully populated with 15k 73GB hard drives. 46 of them are in a RAID 10 array with a 128k stripe, formatted NTFS with a 64k block size and the partition sector aligned. Read/write cache is set at 25%/75%. This server originally had just one MSA70. We added the second for capacity expansion and expected to see a boost in performance as well. As you can probably guess, there wasn’t any increase in performance at all.

Here is what I have as far as numbers. Some of these are guesses based on similar products.

P800 using two external miniSAS 4x connectors: maximum throughput of 2400 MB/sec (2400Mbit/sec effective per link, or 300MB/sec, x 4 links per connector x 2 connectors).
The P800 uses a PCIe x8 connection to the system at 4,000 MB/Sec (PCIe 2.0 2.5GHz 4GB/sec each direction).
Attached to the controller are 46 15k 73GB 2.5” hard drives, good for 3680 MB/sec of raw sequential read speed (46 x 80MB/sec) or 1840 MB/sec of sequential write speed in RAID 10 (23 mirrored pairs x 80MB/sec, two MSA70’s, 46 drives total, based on the Seagate 2.5” 73GB SAS 15k).

Expected write speed should be around 1200 megabytes a second.

We get around 320 MB/Sec sequential write speed and 750MB/sec in reads.

Ouch.

Did I mention I also have a MSA60 with 8 7.2k 500GB SATA drives that burst to 600MB/sec and sustain 160MB/Sec writes in a RAID 10 array? Yeah, something is rotten in the state of Denmark.

With no other options before me I picked up the phone and called.

I went through HP’s automated phone system, which isn’t that painful at all, to get to storage support. Hold times in queue were very acceptable. A level one technician picked up the call and started the normal run of questions. It only took about 2 minutes to realize the L1 didn’t understand my issue; he quickly told me that they don’t fix performance issues, period. He told me to update the driver and firmware, and reboot. Of course none of that had worked the first time, but what the heck, I’ll give it the old college try. Since this is a production system I am limited on when I can just do these kinds of things. This imposed lag makes it very difficult to keep an L1 just sitting on the phone for five or so hours while they wait for me to complete the assigned tasks, so I let him go with the initial action plan in place and an agreement that he would follow up. Twice I got automated emails that the L1 had tried to call and left voicemails for me. Twice, there were no voicemails. I sent him my numbers again just to be on the safe side. Next, I was told to run the standard Array Diagnostic Utility and a separate utility that they send you to gather all the system information and logs, think a PSSDiag or SQLDiag. After reviewing the logs he didn’t see anything wrong and had me update the array configuration utility. I was then told they would do a deeper examination of the logs I had sent and get back to me. Three days later I got another email saying the L1 had tried to call and left me a message. Again, there was no voicemail on my cell or my desk phone. I sent a note back to the automated system only to find the case had been closed!

I called back in to the queue and gave the L1 who answered my case number; he of course told me it was closed. He read the case notes to me: the previous L1 had logged it as a network issue and closed the case. If I had been copying files over the network and not to another local array, I could see why it had been logged that way. I asked to open a new case and to speak to a manager. I was then told the manager was in a meeting. No problem, I’ll stay on the line. After 45 minutes I was disconnected. Not one to be deterred, I called back again. The L1 who answered was professional and understanding. Again, I was put on hold while I waited for the manager to come out of his meeting. About 10 minutes later I was talking to him. He apologized and told me my issues would be addressed.

I now had a new case number and a new L1. Again, we dumped the diagnostic logs and started from the beginning. This time he saw things that weren’t right. There was new firmware for the hard drives, a new driver for the P800, and a drive that was showing some errors. Finally, I felt like I was getting somewhere! At this point it had been ten days since I opened the previous case. We did another round of updates. A new drive was dispatched and installed. The L1 did call back and actually managed to either talk to me or leave a message. When nothing had made any improvement, he went silent. I added another note to the case requesting escalation.

That was eight days ago. At this point I have sent seven sets of diagnostic logs, spent several hours on the phone, and worked after hours for several days. The last time I talked to my L1, the L2’s were refusing to accept the escalation: it was clearly a performance problem, and they don’t cover that. The problem is, I agree. Through this whole process I have begged for additional documentation on configuration and setup options, something that would help me configure the array for maximum performance.

They do offer a higher level of support that covers performance issues, for a fee of course. This isn’t a cluster or a SAN. It is a basic setup in every way. The GUI walks you through the setup, click, click, click, monster RAID 10 array done. What would this next level of paid support tell me?

My last hope is CDW will be able to come through with documentation or someone I can talk to. They have been very understanding and responsive through this whole ordeal.

Thirty-one days later, I’ve still got the same issue. I now have ordered enough drives to fill up the MSA60. The plan is to transfer enough data to free up one of the MSA70’s. Through trial and error, I will figure out what the optimum configuration is. Once I do, I’ll post up my findings here.

If any of you out there in internet-land have any suggestions I’m all ears.

Cannot generate SSPI context…

Out of all the problems you can have with SQL Server, troubleshooting connectivity issues can be the most challenging. When you factor in the complexities of Active Directory and SQL Server’s interaction with it, fixing SSPI errors can be downright baffling.

At my company we are moving onto new hardware and along the way standardizing on SQL Server 2005 x64. Since this is all happening on new hardware, I have the luxury of doing most of the work before we flip the switch. We had one migration under our belt and the second one was looking good when the SSPI error showed up and decided to make sure I spent all my Saturday working.

I ran down my list of things to check:

Date out of sync with domain more than 30 minutes – Nope.

Bad DNS entry – Nope.

Misconfigured service account – Nope.

Log on locally with Windows account – Yep.

Now I was starting to get frustrated, and kind of worried that a rollback might be called for. I did what all good panicked DBA’s do: I searched the Internet. Most of what I found I had already tried, and some of it just didn’t apply. It’s not often a solid web search is such a complete strikeout. When all else fails, step back, look at the problem as a whole, and start from the beginning.

What do we know?

We can’t establish a trusted login between SQL Server and the domain.

The OS and domain are just fine. You can log in locally with a domain account and you can remotely access other server resources with a domain account.

SQL Server services start up just fine under a domain account.

You can log in locally to SQL Server but not remotely.

SQL authentication works just fine.

That pretty much leaves a configuration issue somewhere. When we set this server up, it initially had a different name and IP address, which would later be changed to the old server’s name and IP address. There are a lot of known problems with other parts of SQL Server, like Reporting Services, when you do this kind of rename, but generally SQL Server itself is just fine.

First, you drop the old SQL Server name and add the new one:

sp_dropserver old_name
sp_addserver new_name, local

OR

sp_dropserver [old_name\instancename]
sp_addserver [new_name\instancename], local

Verify everything with these two queries:

SELECT @@Servername 
SELECT * FROM [servername].msdb.dbo.sysjobs
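A couple of extra sanity checks worth running after a rename. These server properties aren’t from the original checklist, but they have been around since SQL Server 2000, so they’re safe on 2005:

-- All three should reflect the new name; a mismatch means the
-- rename didn't fully take.
SELECT SERVERPROPERTY('MachineName')  AS machine_name,
       SERVERPROPERTY('ServerName')   AS server_name,
       SERVERPROPERTY('InstanceName') AS instance_name;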

If you have a problem with @@Servername, you get back nothing or the wrong server name. The four-part select should return data; if not, you usually get “this linked server is not configured for” blah blah blah, which means you may have forgotten the ,local part. After checking all of these things off again, we still had the issue! Now I really was stumped. I didn’t have a lot of other choices and time was running out, so I just started going through EVERYTHING that was network related in the SQL Server configuration. Eventually, I saw the issue. The server IP was correct, but the IP in the SQL Server Network Configuration was wrong! I’ve done a ton of renames like this and hadn’t encountered this particular setting not changing when the server IP changed. I reset it to the new server IP and, just like magic, SQL Server could authenticate to the domain. There was much cheering from my peers. Ok, not really, but I was cheering. The server was up and everyone was able to get back to doing other things as the database faded into the background once again. I just keep telling myself that SQL Server is one of the easiest RDBMSs on the market.

Migrating A SQL Server 2005 Reporting Services Named Instance, The Missing Manual

Microsoft has always been pretty good at one thing: ease of install.

One of the things I always say is that the greatest thing about SQL Server is any idiot can install it. The other thing I always say is that the worst thing about SQL Server is any idiot can install it.

This has carried over to the other fine products that ship with SQL Server. The one I’ve had pretty good luck using as a novice since it was introduced is SQL Server Reporting Services. As someone who hasn’t had the best experience over the years with Crystal Reports, I found SSRS a breath of fresh air.

One of the things I’ve never had to do though is move SSRS to a new server. To start with I did what I always do; I went to MSDN and read what I could find on moving or migrating to a new server. I was feeling pretty confident that I could pull this off.

I had backed up my databases and my key so I could import it on reinstall.

My first mistake was hubris: thinking I could do what seemed correct and second-guess the documentation.

I installed SQL Server plus all the other services, including SSRS, just like I was doing a clean install. The server didn’t start out with the same name, and that was part of the problem. I copied all the files I needed from the old server and shut it down. Next we renamed the server, and I renamed the SQL Server instance to its new name. Once that was done I restored the old SSRS databases and the encryption key. I fired up SSRS, went to look at a report, and got an odd error.

This version doesn’t support multiple instances.

My second mistake was assuming the documentation on MSDN was complete. After starting over and following the instructions there, I ended up with variations on the previous error, or the reports wouldn’t generate, period.

What should have been a two-hour migration was stretching into two days. I had to pull the trigger and bring the old hardware back online.

Having a fallback plan saved us from having any impact on the business, but it did set me back a week until the next maintenance window.

I now had a whole week to research the issue and make sure I had a successful migration, so I got cracking.

I went to my buddy Mr. Google and asked him where the hell I messed up and whether anyone else had come up against this. As I suspected, I wasn’t the first. But there was a twist: it wasn’t just one thing, it was a few things I was bumping up against. Luckily, between MSDN and some of the fine folks on the SQLServerCentral forums, I was able to piece together a successful migration strategy.

The list of obstacles I had to overcome.

This was a named instance and the $ played havoc with the Reporting Services Configuration Manager.
This was a migration of existing data plus the security key to the server.
The server name was also being changed after install to reflect the old server name.

The instructions from MSDN tell you not to choose SSRS to be configured during the install with the default values.
This causes a key part not to be configured: IIS. This is a problem because without IIS configured you can’t view the reports!
This by itself isn’t an issue if you are working with a default instance of SQL Server. But, as I indicated, I’m dealing with a named instance.
After you have installed SQL Server and SSRS, you are supposed to use the Reporting Services Configuration Manager to set up IIS. But since the named instance has the $ in it, the Reporting Services Configuration Manager kicks back the virtual name with:

ReportServicesConfigUI.WMIProvider.WMIProviderException: The virtual directory specified is not valid. Make sure the specified name is not too long and doesn’t contain illegal characters. (example: ;)
at ReportServicesConfigUI.WMIProvider.RSReportServerAdmin.CreateVirtualDirectory(String virtualDirectory, String path)

Very nice. If you let the installer configure the site at install time, it gladly puts the $ in the name for you, like http://<servername>/ReportServer$<instancename>.

I had puzzled out the IIS part and gleaned the rest from Tim Benninghoff and his post on SSC.
The bits about restoring the encryption key came from MSDN; none of the other articles talk about a named instance at all and assume it is a default instance you are moving.

So, I followed these steps to restore the IIS folder structure and move my instance of SSRS to the new server.
Remember, please validate any changes you make to your server; your success isn’t guaranteed just because you followed these instructions.


On the original source server:

  1. Back up the ReportServer database and the ReportServerTempDB database (see the T-SQL sketch after this list).
  2. Back up the encryption key by using the Reporting Services Configuration Manager. It will ask you for a password to back up the key. DON’T FORGET IT!
  3. Using the Internet Information Services Manager, expand the Application Pools folder, right-click on ReportServer$<instancename>, go to All Tasks, and choose Save Configuration File.
    I recommend giving it a .xml extension to make things easier on the restore side.
  4. Go to the Web Sites folder; usually under the Default Web Site there will be two entries with gears next to them, one named Reports$<instancename> and the other ReportServer$<instancename>.
    Right-click on each and save the configuration file like you did with the application pool.
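For step 1, a minimal T-SQL sketch. The backup path is just an example; point it wherever you keep your backups:

-- Step 1 sketch: back up both SSRS databases (path is an assumption).
BACKUP DATABASE [ReportServer]
    TO DISK = N'D:\Backups\ReportServer.bak' WITH INIT;
BACKUP DATABASE [ReportServerTempDB]
    TO DISK = N'D:\Backups\ReportServerTempDB.bak' WITH INIT;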

On your new destination server:

  1. Stop the IIS service by running the iisreset /stop command at the command prompt.
  2. Stop Reporting Services using the Reporting Services Configuration Manager: click Server Status, and then click Stop on the Report Server Status page. If Stop is unavailable, the service has already been stopped.
  3. Restore the backed-up databases from the source server (a restore sketch follows this list).
  4. Import the IIS application pool configuration to IIS on the new server. From Internet Information Services Manager, right-click on your Application Pools folder and select New > Application Pool (from file)… Use the Browse button to locate your .xml file; you should see the new pool in the list.
  5. Import the IIS virtual directory configuration to IIS on the new server. From Internet Information Services Manager, right-click on your Default Web Site and select New > Virtual Directory (from file)…
    Use the Browse button to locate your .xml files, and then click Read File. Select the Virtual Directory name that appears in the white box and click the OK button.
    You should see your new Virtual Directories appear with the names Reports$<InstanceName> and ReportServer$<InstanceName>.
  6. Modify the following registry settings under HKLM\Software\Microsoft\Microsoft SQL Server\MSSQL.(instance#)\Setup:
    RSVirtualRootApplication – Reports$<InstanceName>
    RSVirtualRootApplicationPath – IIS://(ServerName)/W3SVC/1/Root/Reports$<InstanceName>
    RSVirtualRootServer – ReportServer$<InstanceName>
    RSVirtualRootServerPath – IIS://(ServerName)/W3SVC/1/Root/ReportServer$<InstanceName>
    Also, you may need to add the $<InstanceName> to the following keys if they don’t already exist:
    ApplicationPoolConfigured_RM
    ApplicationPoolConfigured_RS
  7. Modify the following .config files for Reporting Services:
    In C:\Program Files\Microsoft SQL Server\MSSQL.(instance#)\Reporting Services\ReportManager, modify RSWebApplication.config so that the ReportServerVirtualDirectory node contains ReportServer$<InstanceName>
    In C:\Program Files\Microsoft SQL Server\MSSQL.(instance#)\Reporting Services\ReportServer, modify rsreportserver.config so that the UrlRoot node contains http://(ServerName)/reportserver$<InstanceName>
  8. In the Reporting Services Configuration Manager confirm the new paths.
  9. Start Reporting Services from the Reporting Service Configuration Manager.
  10. Start IIS service by running the iisreset /start command at the command prompt.
  11. Remove the encryption key from the source server. I used the rskeymgmt utility to do this.
  12. Examine the encryption keys that exist. Based on the type of the instance of SQL Server 2005 installed on the destination server, you type the following in the command prompt window.
    Default instance
    Type rskeymgmt -l, and then press Enter
    Named instance
    Type rskeymgmt -l -i InstName, and then press Enter
    You will see two encryption keys listed. One is from the source server, and the other is from the destination server. The format of the encryption keys listed is as follows:
    ComputerName\InstName EncryptionKeyID
    Note: ComputerName is the name of the destination server. InstName is the placeholder for the name of the instance of SQL Server 2005 installed on the destination server. EncryptionKeyID is the placeholder for the ID of the encryption key.
  13. Delete the encryption key from the source server. Based on the type of the instance of SQL Server 2005 installed on the computer, you type the following in the command prompt window.
    Default instance
    Type rskeymgmt -r SourceEncryptionKeyID, and then press Enter
    Named instance
    Type rskeymgmt -i InstName -r SourceEncryptionKeyID, and then press Enter
    Note: SourceEncryptionKeyID is the placeholder for the ID of the encryption key from the source server obtained in step 12.
  14. Repeat step 12. You will see that only the encryption key of the destination server exists now.
  15. Restore the backup encryption key from the source server.
    In the Reporting Services Configuration Manager, click Encryption Keys, and then click Restore in the Encryption Key page.
    In the Encryption Key Information window, type the password you used to back up the encryption key from the source server, locate the backup encryption key from the source server, and then click OK.
  16. Make sure that the same user account as the ASP.NET service account is set on the Rsreportserver.config file. To do this, follow these steps:
    In the Reporting Services Configuration Manager, click Web Service Identity. Notice the ASP.NET Service Account information.
    Open the SQLInstall:\Program Files\Microsoft SQL Server\MSSQL.X\Reporting Services\ReportServer folder. Right-click the Rsreportserver.config file, and then click Properties.
    Note: SQLInstall is the placeholder for the drive where SQL Server 2005 is installed. MSSQL.X is the placeholder for the Instance ID of the instance of SQL Server 2005 Reporting Services that is running on the destination server. To obtain the Instance ID, click Server Status in the Reporting Services Configuration Manager, and then notice the Instance ID information. Usually it’s ID 3 if you are running multiple instances or ID 2 if not.
    In the Rsreportserver properties dialog box, click the Security tab.
    If the ASP.NET service account is not listed, add it. Make sure that the ASP.NET service account has Read and Read & Execute permissions.
    Click OK to close the Rsreportserver properties window.
  17. If you did not have an instance of SSRS already configured then you will need to execute the following script to add the RSExecRole and the appropriate permissions.
    --create RSExecRole role in master and assign permissions
    USE [master]
    GO
    CREATE ROLE [RSExecRole] AUTHORIZATION [dbo]
    GO 
    GRANT EXECUTE ON master.dbo.xp_sqlagent_notify TO RSExecRole 
    GO 
    GRANT EXECUTE ON master.dbo.xp_sqlagent_enum_jobs TO RSExecRole 
    GO 
    GRANT EXECUTE ON master.dbo.xp_sqlagent_is_starting TO RSExecRole 
    GO 
    --create RSExecRole role in msdb and assign permissions
    USE [msdb]
    GO
    CREATE ROLE [RSExecRole] AUTHORIZATION [dbo]
    GO
    -- Permissions for SQL Agent SP's 
    GRANT EXECUTE ON msdb.dbo.sp_help_category TO RSExecRole 
    GO 
    GRANT EXECUTE ON msdb.dbo.sp_add_category TO RSExecRole 
    GO 
    GRANT EXECUTE ON msdb.dbo.sp_add_job TO RSExecRole 
    GO 
    GRANT EXECUTE ON msdb.dbo.sp_add_jobserver TO RSExecRole 
    GO 
    GRANT EXECUTE ON msdb.dbo.sp_add_jobstep TO RSExecRole 
    GO 
    GRANT EXECUTE ON msdb.dbo.sp_add_jobschedule TO RSExecRole 
    GO 
    GRANT EXECUTE ON msdb.dbo.sp_help_job TO RSExecRole 
    GO 
    GRANT EXECUTE ON msdb.dbo.sp_delete_job TO RSExecRole 
    GO 
    GRANT EXECUTE ON msdb.dbo.sp_help_jobschedule TO RSExecRole 
    GO 
    GRANT EXECUTE ON msdb.dbo.sp_verify_job_identifiers TO RSExecRole 
    GO 
    GRANT SELECT ON msdb.dbo.sysjobs TO RSExecRole 
    GO 
    GRANT SELECT ON msdb.dbo.syscategories TO RSExecRole 
    GO 
    -- SQL Server 2005 requires that the user is in the SQLAgentOperatorRole 
    IF EXISTS (SELECT * FROM sysusers WHERE issqlrole = 1 AND name = N'SQLAgentOperatorRole')
    BEGIN 
        EXEC msdb.dbo.sp_addrolemember N'SQLAgentOperatorRole', N'RSExecRole' 
    END 
    GO
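For step 3 above, a minimal restore sketch. The paths are examples; if the destination’s file layout differs from the source, check the logical names with RESTORE FILELISTONLY and add MOVE clauses:

-- Step 3 sketch: restore the two SSRS databases from the source backups.
RESTORE DATABASE [ReportServer]
    FROM DISK = N'D:\Backups\ReportServer.bak' WITH REPLACE;
RESTORE DATABASE [ReportServerTempDB]
    FROM DISK = N'D:\Backups\ReportServerTempDB.bak' WITH REPLACE;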

Once all that is done, if you had an instance already configured you may need to remove the virtual folders from the default website.

I also restarted everything again and tested it all out.

Good luck, and I hope you find this useful; trying to glean all of this from different resources and to separate what was really going on from the cryptic messages SSRS pumps out can be difficult in the best of times!