If I can get one person to not sign up with CenturyLink Prism or internet, this post is worth it.

I have called CenturyLink over a dozen times in the last few months to get my bill adjusted, spending somewhere between 10 and 14 hours on hold. When I first signed up with CenturyLink, I was promised a fair price ($75 for 40 Mbps down/40 Mbps up). This was to include the cheapest (and unwatchable, Standard Definition) Prism TV package. I then found out the speed is more like 40/10, but they don't specify this in any contract, and I foolishly believed the salesman when he affirmed it was 40/40. I sucked it up, because I hated Comcast so much that I blindly thought nobody could be worse. I was so wrong.

I increased the connection speed to 100/20 (I believe it was 20; it may have been 10, because it was definitely not 100/100) for an additional $10/month. This quickly snowballed into an additional $20-$25/month after they pushed a new contract and added some more fees. I have now spent much more time and effort on crappy CenturyLink's fiber internet than I ever imagined. I called every month to get my price adjusted, and they did. Or they said they did: they "put in the order". You see, CenturyLink doesn't appear to allow its phone support staff to actually make changes to an account. They have to request the change, like an order form, and it goes to another department that actually approves it. But that approval never happened on my account whenever I had a fee removed or a price lowered to a promotional offer. I even received one of those $10/month promotions in the mail, called to have it applied to my account, and was told that it was. Then I found out the following billing cycle that I wasn't eligible for it. They don't proactively tell you that they "cannot" fulfill an order the agent just committed to.

CenturyLink does not tell you when it disapproves of an “order” that customer support puts in.  You just have to find out when your bill does not reflect the change that they agreed to.  This is why it is 100% crucial you always record your calls with CenturyLink in case you want to fight them.

CenturyLink runs a bait-and-switch scam in my opinion.  The price that you are promised will definitely go up within your contract term, which leaves you with no option but to either suck it up like they want or cancel and pay their arbitrary Early Termination Fee.  I chose the latter, and am now going to fight to get it voided because I have recorded calls with them adjusting the price, telling me it was adjusted, and then hard evidence (the bill) that it never happened.

I'm pretty sure it is illegal to say you are going to be charged one thing and then charge much more. But I will probably spend more time than it's worth fighting this fee ($260 for 2 remaining months on my contract, which is more than my bill would have been had I stayed at the original 40/10 speed). So I pray that anybody who reads this, please do not sign up for CenturyLink, even if you hate Comcast as much as I do. I spent several times as much time on hold with CenturyLink as with Comcast, and I have nothing to show for it. They will keep billing whatever they want to, and you will have to cancel to get any justice. It's just not worth your time.


I recently switched web hosts from DigitalOcean to OVH. I decided to go the Docker route to hopefully make migrations less of a pain in the future. I run 4 websites off the host: 3 WordPress sites and 1 Koken site for photography. I don't think it actually was less of a pain to do this, but once I started I had to finish.

I'm certain that this is NOT the optimal setup (running 3 separate WordPress Apache docker containers simultaneously is a waste of resources), but I could not find anyone who had done this, so hopefully it's a jumping-off point for someone with more patience and time than me.

I chose to run a separate docker container for each website and use the popular jwilder/nginx-proxy Docker reverse proxy to serve all of them out at different "VIRTUAL_HOST"s. They are not set up for HTTPS yet.

I am going to use the following containers:
2x containers of the wordpress image, with a small addition for the headers module baked in via a dockerfile.
1x container of koken/koken-lemp
1x container of mariadb (or you could use mysql); this is where all the wordpress containers go to CRUD their databases.
1x container for the nginx reverse proxy

Let's get down to business! The first step is to build the custom wordpress image so that it has the custom modules. You can skip this step if you don't use any extra modules.

##dockerfile start
#taken from https://www.linux.com/learn/how-build-your-own-custom-docker-images
FROM wordpress
MAINTAINER DockerFan version 1.0
ENV DEBIAN_FRONTEND noninteractive
ENV APACHE_LOG_DIR /var/log/apache2
ENV APACHE_LOCK_DIR /var/lock/apache2
ENV APACHE_PID_FILE /var/run/apache2.pid
#enable headers
RUN a2enmod headers
##dockerfile end

Save this as dockerfile in a directory, and run
docker build -t wpapache .

Now you have an image of an up-to-date wordpress, along with a customization baked into it.
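For example, once the headers module is baked in, an .htaccess rule like this one (the header shown is purely illustrative, not something this setup requires) will actually take effect instead of being silently ignored:

```apache
# .htaccess in the WordPress docroot -- this only works because a2enmod headers ran in the image
<IfModule mod_headers.c>
    Header set X-Content-Type-Options "nosniff"
</IfModule>
```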

Next step: set up a docker-compose.yml file so you can get everything linked and running with just one command.

##docker-compose.yml start
##run by docker-compose up -d
##(service names like wpblog1/wpblog2/koken/nginx-proxy are just labels; the mysql name is what the links below point at)
wpblog1:
  image: wpapache
  links:
    - mysql
  environment:
    WORDPRESS_DB_USER: databaseuser
    VIRTUAL_HOST: 'www.wordpressblog1.com,wordpressblog1.com'
  ## note that you can have a continuing list of VIRTUAL_HOST values if you have multiple domains or subdomains that will redirect to this container
  ports:
    - ""  ## put your host:container port mapping here
    #- ""
    ## You can create more ports here, for instance for HTTPS, but I commented it out for now
  working_dir: /var/www/html
  volumes:
    - /web/myblog1:/var/www/html/

wpblog2:
  image: wpapache
  links:
    - mysql
  environment:
    WORDPRESS_DB_USER: databaseuser
    VIRTUAL_HOST: 'www.wordpressblog2.com,wordpressblog2.com'
  ports:
    - ""
    ## notice the port is different for local host so that they do not conflict with the wpblog1
  working_dir: /var/www/html
  volumes:
    - /web/myblog2:/var/www/html/

#koken has its own mysql container built in, so there's no need to link to the mysql/mariadb container like the wordpress ones
koken:
  image: koken/koken-lemp
  environment:
    VIRTUAL_HOST: 'www.kokensitehere.com,kokensitehere.com,photography.kokensitehere.com'
  ports:
    - ""
  working_dir: /usr/share/nginx/www
  restart: always
  volumes:
    - /web/kokensitehere:/usr/share/nginx/www
    - /web/kokensiteheremysql:/var/lib/mysql
    ## note the mysql volume is being stored on the local machine, so it should be easier to manage

mysql:
  image: mariadb:10.1
  ports:
    - ""
  #environment:
  #  - MYSQL_ROOT_PASSWORD=mysqlpasswordhere
  #  - MYSQL_DATABASE=mysqldatabase
  ## I commented the above env variables out because they are used to create a new database, and we are just migrating an existing one. Still good to know though.
  volumes:
    - /web/mysql:/var/lib/mysql

## in case you need to also get into mysql and import your SQL files with a GUI, you should uncomment this whole section.
#phpmyadmin:
#  image: corbinu/docker-phpmyadmin
#  links:
#    - mysql
#  ports:
#    - 8080:80
#  restart: always
#  environment:
#    MYSQL_ROOT_PASSWORD: mysqlpasswordhere

nginx-proxy:
  image: jwilder/nginx-proxy
  ports:
    - 80:80
    - 443:443
  volumes:
    - /var/run/docker.sock:/tmp/docker.sock:ro
##docker-compose.yml end

Now that you have docker-compose.yml configured with your wordpress & koken mysql usernames/passwords and pointed at the right volume locations, you just have to run
docker-compose up -d

If all goes well, everything should now be running on the new server.
If you're like me and have to edit the docker-compose file and re-run it a lot, a small bash script might help: it deletes ALL of your Docker containers (WARNING) and recreates them from the docker-compose.yml file:

#!/bin/bash
##recreateContainers.sh - stop and delete ALL docker containers, then run the docker-compose command.
ALL=$(docker ps -a -q)
if [ -n "$ALL" ]; then
  docker stop $ALL
  docker rm $ALL
fi
docker-compose up -d

If you are having issues with the MySQL databases not connecting, you will have to do a mysqldump of the databases from the old mysql instance and import them into the new one.

mysqldump -urootuserhere -p wordpressblog1 --single-transaction --routines --triggers > /web/mysql/backupwp1.sql

docker exec -i -t mysql_container bash
#now you are in the mysql container
mysql -urootuserhere -p wordpressblog1 < /var/lib/mysql/backupwp1.sql

Go directly to the Customizations page in a new window:
javascript:window.open($('#crmContentPanel iframe:not([style*=\"visibility: hidden\"])')[0].contentWindow.Xrm.Page.context.getClientUrl() + "/tools/solution/edit.aspx?id=%7bfd140aaf-4df4-11dd-bd17-0019b9312238%7d#");

Go to Solutions in a new window:
javascript:window.open($('#crmContentPanel iframe:not([style*=\"visibility: hidden\"])')[0].contentWindow.Xrm.Page.context.getClientUrl() + "/main.aspx?Origin=Portal&page=Settings&area=nav_solution");

Go to Security in a new window:
javascript:debugger;window.open($('#crmContentPanel iframe:not([style*=\"visibility: hidden\"])')[0].contentWindow.Xrm.Page.context.getClientUrl() + "/tools/AdminSecurity/adminsecurity_area.aspx");

Get a Solution's Dependencies in a new window (first open a Solution via Settings/Solutions/double-clicking a Solution):
javascript: try { var cont = frames[1] || frames[0]; window.open(cont.Xrm.Page.context.getClientUrl() + "/tools/dependency/dependencyviewdialog.aspx?objectid=" + cont.APP_SOLUTION_ID.substring(1,37) + "&objecttype=7100&operationtype=dependenciesforuninstall"); } catch (e) { if (console && console.log) { console.log("Error occurred: " + e); }}

Get a Record's ID from an open record:
javascript:var thisXrm; for (var i = 0; i<5; i++) { try {var wf = window.frames; if (wf[i] && wf[i].Xrm && wf[i].Xrm.Page && wf[i].Xrm.Page.data != null) {thisXrm = wf[i].Xrm; var id = thisXrm.Page.data.entity.getId(); if (id) { clipboardData.setData("Text", id.toString()); alert(id); } else { alert('No id yet!'); } }} catch (e) { }}

Get a form's type code from an open record:
javascript: var typeCode = frames[0].Xrm.Page.context.getQueryStringParameters().etc; if (typeCode) { clipboardData.setData("Text", typeCode.toString()); alert(typeCode); }

Setting a value on a rollup field is impossible in supported JavaScript in CRM 2015. You also can't refresh the field individually without clicking the Refresh button, which can be a pain for users if you have several rollup fields to recalculate.

function setValueOnRollup(field, value) {
    //unsupported way of putting a value in the calculated field
    //DOES NOT update the value in CRM, only on the display
    $('#' + field).find('.rollup').find('span')[0].textContent = value;
}

So if you wanted to use it on a form, call setValueOnRollup('yourfieldhere', '100.00').

It's best used when you already have the values you want to show (preferably the real values), which can be retrieved with a query built in CRM REST Builder (https://crmrestbuilder.codeplex.com/).

These also might come in handy to manage the rollups and processes:

Process.js – CRM 2013/2015 Call Action, Workflow, or Dialog from JavaScript (https://processjs.codeplex.com/) is a cool tool to run a process from your custom JavaScript.

Dynamics CRM 2016 Workflow Tools (https://msdyncrmworkflowtools.codeplex.com/), specifically the “Force Calculate Rollup Field” feature to update the value in CRM.

If you’ve ever had this error pop up when deleting a managed solution on your CRM 2015 box:

The Main() component cannot be deleted because it is referenced by 1 other components. For a list of referenced components, use the RetrieveDependenciesForDeleteRequest.

You should use this to quickly get the dependency information:

First, go to the solution's URL (it will look like https://CONTOSO.CO/COCO/tools/solution/edit.aspx?id=%7b2BE0D3AD-DF66-4747-8AA0-A5BA16B146D3%7d).

Then run this bookmarklet, which I call GetSolutionDependencies:

javascript: try { var cont = frames[1] || frames[0]; window.open(cont.Xrm.Page.context.getClientUrl() + "/tools/dependency/dependencyviewdialog.aspx?objectid=" + cont.APP_SOLUTION_ID.substring(1,37) + "&objecttype=7100&operationtype=dependenciesforuninstall"); } catch (e) { if (console && console.log) { console.log("Error occurred: " + e); }}

It will extract the id for that solution, then open a page in your CRM org showing more details. Only tested in IE11, so your mileage may vary.

I recently had a hard drive failure while transitioning to an aufs/snapraid configuration using OMV on my Proxmox box. I ended up losing about 3TB of data, of which 800GB was not backed up on another device on my network. Ouch. Fortunately I had CrashPlan Pro running for the past few years, backing up everything.

I first started using the CrashPlan Restore option on my Windows 8 box but was constrained to between 6 and 10 Mbps, which was estimated to take over a week to restore all of the data. I spoke with CrashPlan support and they said this was normal, as I was sharing the line and resources with other users. Fine, I thought, but I don't want to keep my primary Windows computer on for days on end when I could just set up one of the VMs on my headless server to do the work. I had left it running for a few days already but had only received about 200GB of data.

Surprisingly, when I set up my OMV box to use CrashPlan's GUI by installing LXDE and the CrashPlan application, my download speeds skyrocketed to saturate my entire 50 Mbps line! I was surprised to see over 500GB downloaded in a single day through my OMV box. I thought I would share this for those who might be restoring a huge quantity of data but are stuck with slow speeds on their Windows box. I wasn't able to resume the existing progress from my Windows box (so I ended up overwriting the 200GB that CrashPlan had already downloaded in Windows), but the speed more than made up for it.

I dedicated 3 CPUs and 8GB of RAM to my OMV box, and the restore is eating up about 30-50% of its CPU while it runs. Not too shabby!

More information on restoring from CrashPlan Pro (a service which I highly recommend) is here: https://support.code42.com/CrashPlan/Latest/Restoring/Restoring_Files_From_The_CrashPlan_App

Here is the OpenMediaVault (OMV) main information page: http://www.openmediavault.org/

I’ve been tinkering with Proxmox VE for holding my Linux NAS and media file server (OpenMediaVault [OMV]) in addition to some other Linux containers.  As OMV would be the only VM to access some of these disks I scoured the web for how to add a physical disk to a VM (http://forum.proxmox.com/archive/index.php/t-6192.html).

The problem with using /dev/sdX for passthrough is that the hard drives often change their X value (sometimes my primary will be on /dev/sda and other times /dev/sdg). I couldn't find an easy way, in the Proxmox forums or documentation, to point to the disk itself rather than its shifting device node without messing with udev & udevadm, but I did discover that you can point QM at a /dev/disk/by-id path directly in the VM's .conf file (101.conf, 102.conf, etc.).

Basically, do this in the Proxmox Console or an SSH session:

ls -l /dev/disk/by-id
#Look for the hard drive disk (not the partitions, which will have -part1 and so forth appended).
#In my example I found scsi-SATA_ST5000VN000-1H4_Z111111
#Then go into your vm.conf file (i.e. nano /etc/pve/qemu-server/101.conf) and add it manually to the device type and number that you want to pass through.
#Here is what you would add to have it on the 6th SCSI device (scsi5):
scsi5: /dev/disk/by-id/scsi-SATA_ST5000VN000-1H4_Z111111
#Save the file and you should now see the disk when you look at the VM in Proxmox.

Such an easy solution but it took me a few hours of messing with udev to figure out.
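For context, here is roughly where that line sits in a hypothetical 101.conf (the surrounding values are made-up illustrations, not taken from my actual VM):

```
# /etc/pve/qemu-server/101.conf (excerpt; other lines are illustrative only)
cores: 2
memory: 4096
scsi0: local:101/vm-101-disk-1.qcow2
scsi5: /dev/disk/by-id/scsi-SATA_ST5000VN000-1H4_Z111111
```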

Here's a little snippet I wrote this morning, before my cup of coffee, to query the CRM SQL database (YourOrg_MSCRM) for a list of web resources ordered by last ModifiedOn:

SELECT ModifiedOn, Name, DisplayName, OrganizationIdName, WebResourceType
FROM WebResource --table for web resources
WHERE IsCustomizable = 1 --just the custom web resources
ORDER BY ModifiedOn DESC --most recently modified first

Anders Fjeldstad posted a neat JS shim on GitHub that adds in the ability to make CRM 2013 notifications in CRM 2011.

You can wrap it in a function that you call during your form's onLoad event. That's it! Then you can make notifications appear easily:

Xrm.Page.ui.setFormNotification('This is an error',1,101); //Error (signified by the 1, with messageId 101)
Xrm.Page.ui.setFormNotification('This is a warning',2,102); //Warning (signified by the 2, with messageId 102)
Xrm.Page.ui.setFormNotification('This is informational',3,103); //Informational message (signified by the 3, with messageId 103)

Removing a notification is the same as in CRM 2013:

Xrm.Page.ui.clearFormNotification(101); //this clears messageId 101
Xrm.Page.ui.clearFormNotification(102); //this clears messageId 102
Xrm.Page.ui.clearFormNotification(103); //this clears messageId 103
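Conceptually, the shim just defines setFormNotification/clearFormNotification on Xrm.Page.ui when they are missing. Here is a stripped-down sketch of that idea (my own illustration, not Fjeldstad's actual code, and without the notification bar a real shim renders into the form's DOM):

```javascript
// Minimal polyfill sketch: track notifications by messageId when the
// native CRM 2013 API is absent. A real shim also draws a notification
// bar on the form; here we only keep the bookkeeping.
function installNotificationShim(ui) {
  if (typeof ui.setFormNotification === "function") return; // native support exists
  var notifications = {};
  ui.setFormNotification = function (message, level, messageId) {
    notifications[messageId] = { message: message, level: level };
    return true;
  };
  ui.clearFormNotification = function (messageId) {
    var existed = notifications.hasOwnProperty(messageId);
    delete notifications[messageId];
    return existed;
  };
  ui._notifications = notifications; // exposed so you can inspect what is queued
}
```

Calling installNotificationShim(Xrm.Page.ui) from onLoad would then make the setFormNotification calls above safe on a CRM 2011 form.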

Get it here on GitHub.

I found quite a gem today while browsing the Microsoft Developer blogs for help with debugging JavaScript in CRM 2011. Updating JavaScript in Dynamics CRM 2011 can be a pain without shelling out extra money for Visual Studio plug-ins to simplify it. If you're new to JavaScript in CRM 2011, this video from Marc Schweigert should help you get started with setting up Visual Studio to do the menial work of publishing web resources and debugging them.

Found at EUREKA: F5 debugging of CRM JavaScript web resources
