Cloud Director | How it works with VM Snapshot

Hello Guys,

Hope you are all healthy, safe, and enjoying the present. If you are healthy and safe and still not enjoying life, then start enjoying it :) That was today's little piece of wisdom!

I wish life had a snapshot feature the way VMware does. When you feel you are at your most content, successful, and happiest, you could take a snapshot, save it somewhere, and revert to it on bad days. Unfortunately, it doesn't work that way, because whatever happens in life happens only once and is applied permanently. Enough philosophy! Let's start the main discussion :D

You will be surprised to know that VMware has not documented this behavior, either as a KB article or in the product documentation. There is a good write-up from Tomas Fojta which explains it, but I thought I would showcase this behavior in a bit more detail and with screenshots. Before demonstrating it with screenshots, I would like to share the following:

1. vCenter handles snapshots differently than vCD does.
2. When you take a snapshot of a VM in vCloud Director, vCD reserves space equal to the total size of all the VM's disks in its allocation table.
3. The actual storage used by the snapshot on the datastore still grows the way a snapshot normally does in vCenter Server.
4. In vCenter, if you take a snapshot of a 100 GB VM, the snapshot might be only a few MB.
5. In vCD, if you take a snapshot of a 100 GB VM, the snapshot itself will still be a few MB, but the vCD allocation table reserves space equal to the total size of the VM. If that sounds weird, keep reading.
6. Thin or thick provisioning doesn't change this snapshot behavior in vCD.
7. This means that if you have a 1 TB VM and want to take a snapshot, make sure you have 1 TB of free space in the allocation given to your Org VDC; otherwise, you will have to increase the allocation quota.

I ran a test and below are the details:

vCD version - 9.7 (the version doesn't matter)
VM HDD size - 40 GB
VM memory size - 2 GB

When I created this VM, the used allocation of the Org VDC storage quota changed from 0 GB to 42 GB (40 GB disk and 2 GB memory). See the image below. The allocated space is 200 GB.

Now, I took a snapshot with memory, and below is the updated allocation usage. It works out to 40 GB actual HDD size + 2 GB memory size + 40 GB reserved for the snapshot by vCD in its allocation table = 82 GB.

In vCenter, the snapshot size will still be only a few MB.
Why does this happen? Because vCD cannot know how much your snapshot will grow, it assumes the snapshot can grow up to the full size of your HDD and blocks the same amount of storage in advance.
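To put numbers on that logic, here is a tiny back-of-the-envelope sketch of the allocation math (my own illustration of the behavior described above, not an official vCD formula):

def vcd_used_allocation_gb(disk_gb, memory_gb, has_snapshot=False):
    # Rough model of the Org VDC "used" allocation described above:
    # a snapshot reserves space equal to the total disk size in the
    # allocation table, no matter how small the delta file is in vCenter.
    snapshot_reserve = disk_gb if has_snapshot else 0
    return disk_gb + memory_gb + snapshot_reserve

print(vcd_used_allocation_gb(40, 2))                     # 42 GB after creating the VM
print(vcd_used_allocation_gb(40, 2, has_snapshot=True))  # 82 GB after taking the snapshot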

8. I took a snapshot without memory as well, but the allocation was still 82 GB. I think that might be because vCD still needs to reserve space to be able to suspend the VM.

The conclusion is that a snapshot reserves space equal to the size of the HDD(s); memory doesn't factor into it.

9. Last but not least, you can take only one snapshot of a VM hosted on vCD. In vCenter you can take many snapshots of a VM at different stages, but in vCD, if you take a second snapshot, the first one is automatically deleted.

I hope this information will be helpful for you. Do comment if you have any doubt or query.


vROPS | Cannot share dashboard with my account

Hello Folks,

Today's post gives the solution to one particular issue, but it should open up your mind to hunt for the solution to many others. At the very least, it will help you understand roles and permissions in vROPS with a real-time example.

So, the story goes like this: I was working on vROPS and noticed that I couldn't share a dashboard which I had created with my domain account. Hopefully you know how to share a dashboard created in vROPS. If not, refer to the image below: click on the highlighted icon to share a dashboard.


But I was not getting this icon. Refer to the image below. Do you see the icon here? Of course not.


Now, let's hunt for the solution. If you don't see a piece of functionality in an application, it can be due to one of two reasons:

1. An application bug
2. Insufficient or missing access permissions

The first point is ruled out because I cross-checked with the admin account and could see this option there.

If it's not the first, then it's the second point that is causing the issue here. Now, the question is how to resolve it.

Here you go...

Question 1 - What is your user account name?

In my case, it is holadmin. I used VMware HOL for the demonstration this time ;)

Question 2 - Which "User Groups" is this user part of?

To check this, navigate as shown in the image below.

So now we know that my user account, "holadmin", is part of two groups: "Everyone" and "HOL Admin Group".

Question 3 - What roles are assigned to these groups?

I would check "HOL Admin Group" because the "Everyone" group is the default one. Want to know how to check? Follow the steps below:


Now, as you can see in the picture above, the group "HOL Admin Group" has the "Administrator" role. What next?

Question 4 - Which permissions does this "Administrator" role have?

Let's explore it further with the image below.


To see all permissions, check the permissions column; to modify them, click on EDIT.

Hopefully you can see that the checkboxes Share (Internal) and Share (Public) are unchecked. This is why I couldn't see the share icon: the share-dashboard permission was not granted to this group.

Share (Internal) - the dashboard can be shared only with users who are part of this group.

Share (Public) - the dashboard can be shared with all users who are part of any group in vROPS.

Check both options (or just one, as you wish) and click on UPDATE.

That's it. After the update, you will see the share icon as shown in the very first image above.
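As a side note, you can also inspect users and their roles from the vROPS Suite API instead of clicking through the UI. Below is a rough sketch only: the endpoint paths (auth/token/acquire and auth/users) and the token header are assumptions based on recent Suite API versions, and the hostname and credentials are placeholders, so cross-check the exact paths against the API documentation on your own instance.

import requests

VROPS = "https://myvrops.example.com"   # placeholder hostname

# Acquire a Suite API token (assumed endpoint: /suite-api/api/auth/token/acquire).
auth = requests.post(
    f"{VROPS}/suite-api/api/auth/token/acquire",
    json={"username": "holadmin", "password": "changeme"},   # placeholder credentials
    headers={"Accept": "application/json"},
    verify=False,   # only for lab/self-signed certificates
)
auth.raise_for_status()
token = auth.json()["token"]

# List users (assumed endpoint: /suite-api/api/auth/users) and inspect the
# returned JSON for the group and role details of the account in question.
users = requests.get(
    f"{VROPS}/suite-api/api/auth/users",
    headers={"Authorization": f"vRealizeOpsToken {token}", "Accept": "application/json"},
    verify=False,
)
users.raise_for_status()
print(users.json())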

This was just one example, but if you understand it, you will be able to resolve many similar issues.

Hope it was useful!

Any thoughts, any comments are welcome!








vROPS | How to check manually uploaded PAK file status

Hello Guys,

VMware has this process documented, but it was confusing to me and I had to handle it differently to make it work. So I thought I would write it down on my blog; I hope it will help many of you. It is again surprising that this process is not included in the vROPS API guide.

Basically, after manually uploading the PAK file (or, you could say, after pre-staging the PAK file), we need to make sure we did it right, and we have to use this API method to check that.

Step 1 : Connect to the vROPS API as per the instructions in the vROPS API guide.
Step 2 : Again, as per the instructions in the API guide, use the vROPS auth token and basic authentication.
Step 3 : This step I didn't find anywhere in the API guide. You need to modify the request as shown below, use the GET operation, and then hit the search button.
https://myvrops.com:443/casa/upgrade/cluster/pak/vRealizeOperationsManagerEnterprise-81116522883/status

vRealizeOperationsManagerEnterprise-81116522883 > This is the ID of the PAK file which I uploaded manually and for which I want to check whether it has been distributed across the cluster or not.
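If you prefer to script this check instead of using the API tool, here is a minimal sketch of the same GET call. It assumes the /casa endpoint accepts basic authentication (as the authentication set up in step 2 suggests); the hostname and PAK ID are from this example, and the credentials are placeholders, so replace them with your own.

import requests

VROPS = "https://myvrops.com:443"
PAK_ID = "vRealizeOperationsManagerEnterprise-81116522883"

# Same GET operation as above, run against the CASA endpoint with basic
# authentication. verify=False is only for lab/self-signed certificates.
response = requests.get(
    f"{VROPS}/casa/upgrade/cluster/pak/{PAK_ID}/status",
    auth=("admin", "changeme"),   # placeholder admin credentials
    verify=False,
)
response.raise_for_status()
print(response.json())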
Once I hit the go button, it gave me the output below.

=======Start here

{

    "cluster_pak_install_status": "CANDIDATE",

   "slices": [

        {

            "slice_address": "172.25.10.11", >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>Node 1

            "http_code": 200,

            "document": {

                "pak_id": "vRealizeOperationsManagerEnterprise-81116522883",

                "pak_state": "DISTRIBUTED">>>>>>>>>>>>>>>>>>>>>>>>>>>>>> It is Okay

                "pak_install_status": "CANDIDATE",

                "pak_distribution_progress": null,

                "current_action": null,

                "node_unchanged": true,

                "failed_details": null,

                "warning_details": "upgrade.pak.warning",

                "orchestrator_action": "UNKNOWN",

                "pre_upgrade_validation_results_available": false,

                "log_links": []

            },

            "content_type": "application/json"

        },

        {

            "slice_address": "172.25.10.12",>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>Node 2

            "http_code": 200,

            "document": {

                "pak_id": "vRealizeOperationsManagerEnterprise-81116522883",

                "pak_state": "DISTRIBUTED",>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> it is again Okay

                "pak_install_status": "CANDIDATE",

                "pak_distribution_progress": null,

                "current_action": null,

                "node_unchanged": true,

                "failed_details": null,

                "warning_details": "upgrade.pak.warning",

                "orchestrator_action": "UNKNOWN",

                "pre_upgrade_validation_results_available": false,

                "log_links": []

           },

            "content_type": "application/json"

        },

        {

            "slice_address": "172.25.2.238",>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>Node3

            "http_code": 200,

            "document": {

                "pak_id": "vRealizeOperationsManagerEnterprise-81116522883",

                "pak_state": "DISTRIBUTED",>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Okay

                "pak_install_status": "CANDIDATE",

                "pak_distribution_progress": null,

                "current_action": null,

                "node_unchanged": true,

                "failed_details": null,

                "warning_details": null,

                "orchestrator_action": "UNKNOWN",

                "pre_upgrade_validation_results_available": false,

                "log_links": []

            },

            "content_type": "application/json"

        },

        {

            "slice_address": "172.17.1.238",>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>Node4

            "http_code": 200,

            "document": {

                "pak_id": "vRealizeOperationsManagerEnterprise-81116522883",

                "pak_state": "DISTRIBUTED",>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Okay

                "pak_install_status": "CANDIDATE",

                "pak_distribution_progress": null,

                "current_action": null,

                "node_unchanged": true,

                "failed_details": null,

                "warning_details": "upgrade.pak.warning",

                "orchestrator_action": "NO_ACTION",

                "pre_upgrade_validation_results_available": false,

                "log_links": []

            },

            "content_type": "application/json"

        },

        {

            "slice_address": "172.17.1.239",>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>Node5

            "http_code": 200,

            "document": {

                "pak_id": "vRealizeOperationsManagerEnterprise-81116522883",

                "pak_state": "DISTRIBUTED",>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Okay

                "pak_install_status": "CANDIDATE",

                "pak_distribution_progress": null,

                "current_action": null,

                "node_unchanged": true,

                "failed_details": null,

                "warning_details": "upgrade.pak.warning",

                "orchestrator_action": "NO_ACTION",

                "pre_upgrade_validation_results_available": false,

                "log_links": []

            },

            "content_type": "application/json"

        },

        {

            "slice_address": "172.25.3.238",>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>Node 6

            "http_code": 200,

            "document": {

                "pak_id": "vRealizeOperationsManagerEnterprise-81116522883",

                "pak_state": "DISTRIBUTED",>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Okay

                "pak_install_status": "CANDIDATE",

                "pak_distribution_progress": null,

                "current_action": null,

                "node_unchanged": true,

                "failed_details": null,

                "warning_details": "upgrade.pak.warning",

                "orchestrator_action": "UNKNOWN",

                "pre_upgrade_validation_results_available": false,

                "log_links": []

            },

            "content_type": "application/json"

        },

        {

            "slice_address": "172.17.1.237",>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>Node 7

            "http_code": 200,

            "document": {

                "pak_id": "vRealizeOperationsManagerEnterprise-81116522883",

                "pak_state": "NOT_DISTRIBUTED>>>>>>>>>>>>>>>Here is the difference but still OK

                "pak_install_status": "INITIAL",

                "pak_distribution_progress": "6962",

                "current_action": null,

                "node_unchanged": true,

                "failed_details": null,

                "warning_details": "upgrade.pak.warning",

                "orchestrator_action": "NO_ACTION",

                "pre_upgrade_validation_results_available": false,

                "log_links": []

            },

            "content_type": "application/json"

        },

        {

            "slice_address": "172.17.1.248",>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>Host8

            "http_code": 200,

            "document": {

                "pak_id": "vRealizeOperationsManagerEnterprise-81116522883",

                "pak_state": "DISTRIBUTED",>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Okay

                "pak_install_status": "CANDIDATE",

                "pak_distribution_progress": null,

                "current_action": null,

                "node_unchanged": true,

                "failed_details": null,

                "warning_details": "upgrade.pak.warning",

                "orchestrator_action": "NO_ACTION",

                "pre_upgrade_validation_results_available": false,

                "log_links": []

            },

            "content_type": "application/json"

        },

        {

            "slice_address": "172.25.3.237",>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>Node 9

            "http_code": 200,

            "document": {

                "pak_id": "vRealizeOperationsManagerEnterprise-81116522883",

                "pak_state": "DISTRIBUTED",>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Okay

                "pak_install_status": "CANDIDATE",

                "pak_distribution_progress": null,

                "current_action": null,

                "node_unchanged": true,

                "failed_details": null,

                "warning_details": null,

                "orchestrator_action": "UNKNOWN",

                "pre_upgrade_validation_results_available": false,

                "log_links": []

            },

            "content_type": "application/json"

        },

        {

            "slice_address": "172.25.1.244",>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>Node 10

            "http_code": 200,

            "document": {

                "pak_id": "vRealizeOperationsManagerEnterprise-81116522883",

                "pak_state": "DISTRIBUTED",>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Okay

                "pak_install_status": "CANDIDATE",

                "pak_distribution_progress": null,

                "current_action": null,

                "node_unchanged": true,

                "failed_details": null,

                "warning_details": null,

                "orchestrator_action": "UNKNOWN",

                "pre_upgrade_validation_results_available": false,

                "log_links": []

            },

            "content_type": "application/json"

        }

    ],

    "cluster_data": {

        "cluster_action_failed": false,

        "cluster_action_failed_time": null,

        "cluster_action": "NO_ACTION"

    }

}

 =======End here
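With this many nodes it is easy to miss one entry in that wall of JSON, so a short script can summarize the per-node pak_state for you. A minimal sketch, assuming you saved the output above to a file named pak_status.json (the filename is mine, not something the product creates):

import json

# Load the status JSON saved from the GET call shown earlier (hypothetical filename).
with open("pak_status.json") as f:
    status = json.load(f)

print("Cluster status:", status["cluster_pak_install_status"])

# Print each node's distribution state and flag anything not yet DISTRIBUTED.
for node in status["slices"]:
    doc = node["document"]
    marker = "OK" if doc["pak_state"] == "DISTRIBUTED" else "CHECK"
    print(f'{node["slice_address"]:>16}  {doc["pak_state"]:<16} {marker}')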

Please ignore any spelling or formatting slip-ups due to lack of time; the technical content is never compromised. Feel free to write back to me in case of any confusion.