I am attempting to create a solution in which I remotely tell a system to capture an image of the C: drive and store it on the D: partition. Then, if C: becomes corrupt, I can have someone manually load the image from D: back over C: using an ImageX disk. Has anyone else been able to accomplish this or something similar? Personality Backup and Restore, along with capturing the image and pulling it back to the server, will not suffice, as we have technicians out in the field who will need to access the image (backup) on demand.
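For what it's worth, the capture/restore pair can boil down to two ImageX invocations. The sketch below just assembles the command lines; the WIM path, image name, index, and drive letters are illustrative assumptions, not from any specific environment, and the apply step has to run from WinPE or the ImageX disk since you can't overwrite the running OS volume:

```python
# Hypothetical sketch: build the two ImageX command lines for this workflow.
# The WIM path, image description, and index are illustrative assumptions.

def capture_cmd(source="C:", wim=r"D:\c_backup.wim", name="C drive backup"):
    """Capture the source volume into a WIM stored on the other partition."""
    return ["imagex", "/capture", source, wim, name]

def apply_cmd(wim=r"D:\c_backup.wim", index=1, target="C:"):
    """Lay the WIM back down over the target volume (run from WinPE media)."""
    return ["imagex", "/apply", wim, str(index), target]

print(" ".join(capture_cmd()))
print(" ".join(apply_cmd()))
```

The remote trigger could then be whatever job mechanism you already have, as long as it ends up shelling out to the capture command on the target machine.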
Hello, we are starting to use Radia to deploy Microsoft patches. We have a highly remote environment and are always connected through WWAN. My question is: what are the best practices for patching highly mobile devices? And if you use a VPN application, how do you split the tunnel so that patches download automatically over Wi-Fi once the computer connects back in house?
I’m migrating a CSDB to 9.2, and in the migration guide’s best practices for the AUDIT domain it states to migrate the individual services. Based on this, I’m assuming it’s an export for each ZSERVICE in the AUDIT domain to produce the instance, class, and resource files. In another part of the migration guide, it states to export all resources for the CSDB; again, I’m assuming that means a single resource file for all resources in the CSDB. The question is: if the AUDIT domain is exported by service, which produces an .xpi, .xpr, and .xpc, won’t this overwrite the schema changes when I import it?
The June acquisition is only bringing down MS16-070. I have an urgent case open with Accelerite to investigate the problem.
When I look in the radstate.log file (ZTIMEQ object) on a couple of computers, the daily software connect is set to ZSCHDEF = [DAILY(19991010,00:00:00)], so the client doesn’t connect unless I force a software connection. Is there a way to reset the daily software connect time without having to reinstall the Radia 9.1 client?
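As a sanity check, the ZSCHDEF value from the log can be picked apart to see what the timer actually thinks its anchor date is. The parser below is a throwaway helper written for illustration against the syntax shown in radstate.log, not a Radia utility:

```python
import datetime
import re

def parse_zschdef(zschdef):
    """Pull the frequency, anchor date, and time out of a ZSCHDEF string
    shaped like [DAILY(19991010,00:00:00)] (format as seen in radstate.log)."""
    m = re.match(r"\[(\w+)\((\d{8}),(\d{2}:\d{2}:\d{2})\)\]", zschdef)
    if not m:
        raise ValueError("unrecognized ZSCHDEF: " + zschdef)
    freq, date_str, time_str = m.groups()
    anchor = datetime.datetime.strptime(date_str + " " + time_str, "%Y%m%d %H:%M:%S")
    return freq, anchor

freq, anchor = parse_zschdef("[DAILY(19991010,00:00:00)]")
print(freq, anchor.date())  # anchor date is 1999-10-10, long in the past
```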
Can anyone confirm or deny that there is/was a 255-character path limit in the Batch Publisher? I think I remember this from long ago. It used to be a Windows problem; Windows has since addressed it, but I think utilities still need to be updated to use the newer APIs. We are probably using an older version, so maybe it was addressed in a newer release. We can likely work around it, or raise it with support. Just looking to see if anyone else knows the answer off the top of their head.
Details from the error; I am pretty sure it is because the path for this file is over 255 characters.
20160121 09:06:12 Error: Target <Q:/_AUTOPUBLISH/MDT_W8X64_ENT_WIN/V22.214.171.124/deployprod/Deploy/Operating Systems/Windows 8.1 Ent (x64) 2014.11.21/sources/sxs/amd64_netfx-system.directoryservices.protocols_b03f5f7f11d50a3a_6.3.9600.16384_none_3cdb1f0252010eb1/system.directoryservices.protocols.dll> does not exist
20160121 09:06:12 Error: could not read "Q:/_AUTOPUBLISH/MDT_W8X64_ENT_WIN/V126.96.36.199/deployprod/Deploy/Operating Systems/Windows 8.1 Ent (x64) 2014.11.21/sources/sxs/amd64_netfx-system.directoryservices.protocols_b03f5f7f11d50a3a_6.3.9600.16384_none_3cdb1f0252010eb1/system.directoryservices.protocols.dll": no such file or directory
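For reference, the classic Win32 limit is MAX_PATH = 260 characters (the "255" figure is the common shorthand), and tools built against the older file APIs hit it unless they opt into the `\\?\` long-path prefix. Measuring the path from the log bears this out:

```python
# The failing target path, copied from the Batch Publisher log above.
failing_path = (
    "Q:/_AUTOPUBLISH/MDT_W8X64_ENT_WIN/V22.214.171.124/deployprod/"
    "Deploy/Operating Systems/Windows 8.1 Ent (x64) 2014.11.21/sources/sxs/"
    "amd64_netfx-system.directoryservices.protocols_b03f5f7f11d50a3a_"
    "6.3.9600.16384_none_3cdb1f0252010eb1/system.directoryservices.protocols.dll"
)

# MAX_PATH on the classic Win32 file APIs.
MAX_PATH = 260

print(len(failing_path), len(failing_path) > MAX_PATH)
```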
Kudos goes out to the Accelerite team for quickly addressing an issue we reported. I have been a proponent of creating a less RCS-intensive Patch Manager product for quite a while. When the "Patch with Metadata Only" option came out, I believed this was the answer. It was actually called "Offline Scanning" or "OPUS" (no idea what that stood for) at certain points. That model downloads all the patch data needed to do discover_patch to the device so it does not have to talk to the RCS during the scanning. I had assumed that meant it would disconnect from the RCS and not be using an Active Task, and we had designed our capacity around this. However, after troubleshooting our infrastructure issues during our production patch rollouts over the last couple of months, I discovered it was not working that way. It was not actually talking to the RCS, but it was still holding an Active Task and a connection to the RCS during the 3 to 5 minutes it took to do discover_patch (all locally). I am referring to the Microsoft patches, BTW.
Accelerite understood the issue, addressed it, and it is already fixed for the next time you do a patch acquire. I was impressed.
Long-time customers may remember that a big selling point of Radia over EDM, when it was first released, was that EDM held the connection to the RCS (manager) during the entire connect. With Radia, the connect was broken up so the client would connect and disconnect multiple times, so as not to waste resources on the server side while the client was busy doing stuff locally. We need to make sure we maintain that model; it helps us customers keep our infrastructure costs down. I actually have an enhancement request for disconnecting during the BDELETE for similar reasons.
I have not implemented it in Prod yet (it will not be until the Jan patches are released), but I will repost any interesting results in reduced capacity needs after I analyze them.
During a recent Windows 10 tablet image deployment I noticed slowness and found it to be related to the default power scheme in WinPE, which is set to Balanced by default.
Here’s a link to a related post by the Deployment Guys: http://blogs.technet.com/b/deploymentguys/archive/2015/03/27/reducing-windows-deployment-time-using-power-management.aspx
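The fix from that post boils down to switching WinPE to the built-in High Performance scheme before imaging. This sketch just assembles the powercfg call (the GUID is the well-known built-in High Performance scheme ID); actually running it only makes sense inside WinPE/Windows, e.g. early in the task sequence:

```python
import subprocess  # used only if you uncomment the run line below

# Well-known GUID of the built-in High Performance power scheme.
HIGH_PERFORMANCE = "8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c"

def high_performance_cmd(scheme_guid=HIGH_PERFORMANCE):
    """Build the powercfg invocation that activates the given power scheme."""
    return ["powercfg", "/setactive", scheme_guid]

# subprocess.run(high_performance_cmd(), check=True)  # only inside WinPE/Windows
print(" ".join(high_performance_cmd()))
```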