Lately, I've been seeing people run into issues when trying to use BAT on CUCM. When you open the Excel file, you get the macros and all that jazz needed to start whipping up a spreadsheet for import. Normally this works without issue, but now that systems are going pure 64-bit, there's really no excuse to keep building 32-bit-only applications.
So what is the problem, you might ask? The biggest issue is that Cisco has not developed their BAT file to work with 64-bit Excel. This can cause headaches, as large phone deployments suck without BAT. Fear not, however; there is a workaround that a clever gentleman on another website posted to fix the issue. The only problem with this solution is that you have to make the change to the Visual Basic file every time you open a BAT sheet to get everything to work.
Go here: 64-bit Excel Fix
Again, this isn't my work but a solution I found and posted on the Cisco support forums when the issue came up. It makes life easy because hand jamming 283 phones sucks and you will be in a hand brace the next day if you go that route. Oh and by the way, I passed my CIPT1 exam. Two left and I am preparing to take both of them back to back.
Tuesday, May 26, 2015
CIPT1 again
Tomorrow marks my second attempt at CIPT1. If I pass, I'll go schedule CIPT2, since at this point I've combined my study for both. That would leave two exams before the October deadline, and none of the rest are going to be an issue. Ideally, I'm CCNP Voice by the end of June or beginning of July. Then I can take the CIPTV2 exam, which is the new 10.6 (10.5(2)) track, and convert to a CCNP Collaboration cert. Here's to hoping I pass and get one exam closer to my goal so I can move on to other things.
Wednesday, May 20, 2015
The Telepresence Content Server
So the prospect of doing a greenfield install for a customer needing telepresence is here. From my time at Cisco I know telepresence fairly well and have added it to my UC suite of knowledge, but some of the additional pieces required here, not so much. Recording and streaming might seem like a simple task, but it is anything but that with the current documentation, or lack thereof.
See, the problem is that Cisco's current documentation is slightly incorrect and out of date. vBrick is Cisco's new partner, and their product is replacing Show and Share. Before, you had a TCS (Telepresence Content Server), an MXE, and then a Show and Share server, plus whatever else you needed on top of that. These days, at least as of six months ago, vBrick REV and vBrick DME are the standard.
I had asked whether the TCS by itself could stream live and recorded video via a web portal. Regardless of what your Cisco sales rep may say, the answer is that it can do both, with a limit of 100 users. For a small environment that would be acceptable, but it obviously doesn't scale well. The next issue is that you need somewhere to offload all this material anyway, and that is where the DME comes in. The other option is a Wowza server, but since Cisco is now partnered with vBrick, that solution isn't really promoted, even though it is supported.
The vBrick DME is essentially a storage server and transcoder/transrater with the ability to feed streams to a video portal or control tower of some sort for others to view. It used to be a hardware appliance but now runs as a VM on a C220 M4 server. So in the current picture, we have a TCS that records live and streams straight to the DME server for storage and whatever else you may need to do. Our current diagram looks like this:
As you can see, the telepresence endpoints both connect to the Cisco TCS for live recording. A screen comes up letting participants know they are about to be recorded, and then they are on their way. Everything recorded is sent straight to the DME via SIP or H.323. Regardless of what the Cisco documentation says, SIP is completely supported on the TCS. The documents are wrong; I sent an email to Cisco, posted on their support forums, and got the "we know and are working on fixing the documentation" answer. This is actually good news, because H.323 by itself would have made things difficult by requiring registration to a VCS and then traversal licenses.
The next part of the equation is the REV. Think of the REV as a YouTube-style portal or distribution server. Basically, it takes the videos on the DME and makes them presentable to a wide range of people. The basic license, from what I see from vBrick, starts at 1000 users. You set up users or public access, hand out a link, and the end user can watch a recorded or live stream. Permissions, polling, comments, it's all there on the vBrick REV. Statistics like browser, traffic, and so on are also all present. The confusing part? The REV can talk directly to the TCS if it wanted to; you just wouldn't have any mass storage. So the diagram turns into a triangle and looks like this now:
Overall, it is actually a simple design; learning the equipment and how to install it is the tricky part. The REV, luckily, is easy, as it is mainly cloud based. On-premises deployments are involved and expensive and, from what I was told, aren't needed unless you are a big corporation. The final piece of the puzzle is that you will need an MCU of some sort. A TPS or vTS (virtualized Telepresence Server) will let you host multipoint meetings. This is required because the TCS joins the meeting as an endpoint, which turns a point-to-point call into a three-way meeting, and any telepresence meeting beyond two endpoints requires an MCU of some kind. So just to tie it all together, at the bottom is the final topology. I hope some of this information has demystified the whole situation, because it sure confused me.
Monday, May 18, 2015
The final SD card and CIMC Post
At last, we are here with the final post on the CIMC and SD card fiasco that has riddled me with holes. Over the weekend I finished up the SD card project, in which I needed to install two SD cards in a UCS C200 M3 server. You might think this would take a trivial amount of time to accomplish, but it has been anything but simple. Cisco's documentation hasn't been updated to reflect that 32 GB SD cards are now supported. How did I find this out, you may ask? I contacted TAC when I had a problem: the new SD cards were 32 GB and the old end-of-sale ones were 16 GB.
This might not seem like a problem at first. In fact, I didn't even consider that they were different sizes; I just dropped them in and went about my way, only to find out that RAID 1 was not going to work. For those of you who don't know, RAID 1 is a mirrored configuration of two disks: one disk is identical to the other, so if one fails, it isn't a big deal. Well, two different-size disks means no RAID 1. So I called TAC to see if there was a way around this, and they didn't have an answer for me. I asked why they thought it was acceptable to sell only 32 GB cards now and still didn't get an answer.
I pondered for a bit, stewed for a bit, and then came up with a brilliant solution: take the existing 16 GB SD card from the sub and put it in the pub! That gave me two 16 GB SD cards in the pub and two 32 GB SD cards in the sub. As soon as I did that I was able to synchronize the SD cards in the CIMC, but I still got a "degraded RAID." What gives, you might ask? Well, you still need to run the SCU ISO to format the SD cards, which is what lets you put ESXi on them. Now, this sounds easy, and it really is, but I had quoted 3.5 hours of labor and ended up with 13.5 because of issue after issue. First, the SD card size mismatch; second, getting someone to go out there and spend an hour and a half between two sites pulling SD cards while I shut down and brought up VMs and servers; third, the TAC case took some time; and fourth, the actual process of getting the SCU uploaded and run takes forever.
Nowhere in Cisco's documentation does it say that an SD card RAID array can take one to two hours each to build! I had assumed that since they were flash devices this would be done in 10-20 minutes, but no, the first 16 GB array took upwards of an hour while the 32 GB array took about an hour and a half. What makes it worse, it said it was done and wasn't. There is a pop-up message in the SCU that states the RAID array is done building and you can reboot. The first time, on the 16 GB cards, this went off without a hitch, so I rebooted the machine and all the VMs came back up no problem. The 32 GB server lied to me. I rebooted, checked the CIMC, and bam: "RAID Rebuilding," then "RAID Degraded," because I rebooted and it wasn't done. Doh!
So I started the long process all over again. By the time I was done rebuilding the array it was 2 AM. Granted, most of this was me just checking on it from time to time and not really doing anything, but still. When it said the RAID array was complete, I checked the CIMC before I rebooted, and it had actually completed this time around. Why it didn't work the first time I don't know, but it did this time. Lesson learned: if you plan on using the SD cards, get some chips and a computer game, and keep tabs on it while it builds the RAID array.
I think at this point my blog is the most comprehensive guide to the entire CIMC and SD card setup. There is more information here than in the Cisco docs, with the exception of the step-by-step guides, which are helpful but don't tell you all the small things. It has been a long few months getting this done, but it has been an experience, one that I will remember, and I will remind customers that the SD cards are a useless add-on that you don't need.
Thursday, May 14, 2015
Telepresence and design
Recently I was brought into a telepresence design for a customer. Having prior experience with almost all of the endpoints and servers, I saw this as a good opportunity to apply what I know, since I previously worked in the RMS NOC at Cisco supporting telepresence.
Basically, a large room needs a multi-camera setup, and we can't use the TX or IX endpoints since the room is going to be integrated and not set up with a desk. SX endpoints do this well: you can drop in a codec, hook up a camera or three, then composite the layout to fit the screen deployment you want. For instance, an SX80 with three cameras can be deployed to view three different locations in a room, and then you set up the layout however you want so that you can see all three locations at the same time. Active speaker is still an option, but I would just lay it out in a fashion where you can see everything and no screen switching happens from a Jabber client's perspective.
I know this post is a bit all over the place, but I wanted to put this info out for those looking for a solution that doesn't need or want a single desk with a camera cluster. SX10s can easily integrate with an existing flatscreen, SX20s are great single-camera solutions where you want to use a larger screen that Cisco can provide, and the SX80s are beasts that can hold up to three cameras via HDMI.
The other options out there are the DX endpoints, but those are primarily desktop endpoints. The DX70 and DX80 are the EX replacements, so to speak. They run Android and don't need a tablet to operate. The DX650 is a phone replacement with a large screen for video. These are ideal for situations where you don't need a DX70/80 but use video enough to warrant a bigger screen than a 9900 series.
Wednesday, May 6, 2015
The CIMC KVM Fiasco
I was doing yet another CIMC and BIOS upgrade for the same customer today. This time it was on the subscriber, because an SD card was not seen when I put it in slot 2. I know for a fact the card is good, because both SD cards worked in slot 1 and were seen. I pushed the upgrade from 1.4(7) to 1.5(7e), still step-upgrading through 1.5(4) since those were the instructions from TAC the last time I did this, and that resolved the problem. But I ended up running into the infuriating Java problems yet again! The error: "Unable to launch application" via the KVM, no matter what I did.
So, long story short, on the 1.5 train for CIMC the KVM does something differently, and the newer versions of Java crap a brick over it and fail to launch the program due to some certificate issue that you can't seem to delete. The fix is to install Java 7, plain old base Java 7. Additionally, make sure it's 32-bit so that if you use IE you don't get hosed. Once I did this, I was good to go again and on my way to finishing the step upgrade, and the SD cards are now seen and good to go.
So here I am, eating some pretzels, waiting on this thing to finish. I have to load the SCU next and format the flash cards with the appropriate data in a RAID-1 configuration, except...I can't. You see, Cisco seems to have end-of-lifed the 16 GB SD cards originally in there. I don't know if there is a direct replacement, but our inside sales team ordered the 32 GB ones Cisco instructed them to. Well, that's all great and dandy for the future, but it sucks now because I can't RAID-1 two different-sized SD cards. My solution: duplicate the install on both cards, since all they hold is firmware anyway, and deal with the "Degraded RAID" message that loves to sit there and stare you in the face. I'm going to go back and eat some more pretzels now; see y'all tomorrow!
SQL CDR Pulls
Almost two months ago, I was given the task of pulling CDRs for an entire month for a fairly high-call-volume directory number. If you haven't done this before, CUCM provides a CDR pull under the serviceability section. The problem: it truncates the results to 100 records, so you end up having to run 50 million pulls. The quick and easy way around this is to simply run a SQL query, but that can be intensive on CUCM. Be sure to do it during off-peak hours, just in case you botch something, send CUCM into a spiral, and need to reboot. There are two different ways to run the CDR query based on version.
Pre 8.x Version
run sql car select * from tbl_billing_data where originalcalledpartynumber like '3126'
8.x and Forward:
run sql car select datetimestampconnect, duration, callingpartynumber, finalcalledpartynumber from tbl_billing_data where callingpartynumber = '3126'
Keep in mind the above are just examples. You don't need all those fields if you don't want them, but I included the ones I happen to use. Make sure you set your terminal length to something like 20,000 lines, just in case some of the results scroll out of bounds of the window. Once you get that output, copy and paste it into a text file. From there, import that data into Excel using the text-file import. If you don't do it this way, Excel will not split the columns for you based on spaces or commas, and you end up with a mess.
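Since the goal here was a full month of CDRs, it also helps to bound the pull by date right in the query. Here is a rough sketch of that, with the assumption that the timestamp columns in tbl_billing_data (datetimestamporigination in this example) hold Unix epoch seconds in UTC and that the epoch values below bracket March 2015:
run sql car select datetimestampconnect, duration, callingpartynumber, finalcalledpartynumber from tbl_billing_data where callingpartynumber = '3126' and datetimestamporigination between 1425168000 and 1427846399
Swap in whatever epoch range you actually need; an epoch converter makes quick work of the start and end values.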
Now, here is the next kicker. The times are not in a common format, so you may see something like this:
amporigination | datetimest | ampconnect | datetimest | ampdisconnect |
============== | ========== | =========== | ========== | ============= |
56:26.0 | 3/2/2015 | 56:39.0 | 3/2/2015 | 56:41.0 |
16:31.0 | 3/2/2015 | 16:34.0 | 3/2/2015 | 18:47.0 |
44:12.0 | 3/2/2015 | 44:16.0 | 3/2/2015 | 50:16.0 |
03:00.0 | 3/2/2015 | 03:03.0 | 3/2/2015 | 08:19.0 |
What is ampconnect or amporigination in actual time? If you click the field, the text box at the top of the sheet will show you the real time. I believe these times are in epoch time and need to be converted. Just highlight all the times and format the cells as "Time" and bam, done! An easier way to deal with all of this is to just buy a third-party billing program, but if you must do it the hard way, this is how it's done.
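If you'd rather skip the Excel gymnastics entirely, you can try making the database hand back readable timestamps itself. A rough sketch, assuming the CAR database is Informix under the hood and accepts its dbinfo('utc_to_datetime', ...) function against these columns:
run sql car select dbinfo('utc_to_datetime', datetimestampconnect), duration, callingpartynumber, finalcalledpartynumber from tbl_billing_data where callingpartynumber = '3126'
If that works on your version, the first column comes back as a normal date and time instead of a raw epoch value, and the copy-and-paste into Excel needs no reformatting.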