Cannot mount Exchange database after use of eseutil

Today I faced the challenge of getting a corrupted Exchange 2010 database working again. Over the weekend, the customer's Exchange 2010 server had run out of hard disk space.

After the missing disk space was fixed, one of the Exchange databases could not be mounted; the mount failed with an event log message saying that a checksum in the database was no longer correct. Unfortunately, the customer did not have a current backup (the last one was 3 days old) and circular logging was not activated, so it was not possible to restore the log files.

To get the database up and running again I repaired the database with the following command:

Eseutil /P "G:\Exchange Database\Mailbox Database 02.edb" /Tx:\TEMPREPAIR.EDB

Using the /P option will most likely result in data loss, as it removes defective parts of the database. The /T parameter specifies the location (x:\TEMPREPAIR.EDB) of the temporary database that serves as a cache during the repair. For a 500 GB database, the repair run took about 6 hours and completed successfully.

When I tried to mount the database again, it still failed. The reason was the files (.log etc.) that were still in the same directory as the Exchange database. I moved all files except the .edb to a subdirectory, tried again, and this time the database mounted successfully.
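
Before mounting, it is also worth verifying that the repaired database is in a clean state. A quick check (using the same path as above) is to dump the database header with Eseutil and look for "State: Clean Shutdown" in the output:

Eseutil /MH "G:\Exchange Database\Mailbox Database 02.edb"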

Robocopy skips files with multithreading switch set to /MT:32

Some time ago I reported on an error that Robocopy logged when it tried to copy a directory on which no primary group was set.

In the same project we encountered another problem: when we used Robocopy with the /MT:32 switch (multithreading with a maximum of 32 threads), some files and directories were simply not copied. These were often zero-byte files, but also ordinary image files such as JPEGs, and even entire directories. The skipped files and directories had nothing special about them; we checked them in Explorer as well as with PowerShell and Get-Acl.

Unfortunately, the skipped files and directories are not logged either; no error message appears in the log file.

When we then copied one of the skipped directories with the same Robocopy call (limited to that directory), it worked. A strange behavior that looks like a bug.

After repeating the Robocopy call without the /MT switch, everything worked as expected.
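
For illustration, a call of this shape (the paths are placeholders, not the actual project paths) showed the problem with /MT:32 and worked reliably once the switch was dropped:

robocopy \\netapp\share D:\share /MIR /SEC /SECFIX /R:1 /W:1 /LOG:D:\robocopy.log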

We only noticed the problem because we compared the source directory with the target directory after the Robocopy run using Beyond Compare (an excellent program).

Beyond Compare can not only compare directories but also synchronize them. However, the transfer of NTFS permissions must be explicitly activated.

The source system was a NetApp filer, which could have something to do with the observed behavior.

Server 2019 Preview – ADPREP fails with “Failed to verify file signature: error 0x800b0109”

Today I had some time and wanted to upgrade my HP ProLiant G8 MicroServer running Server 2016 to the new Server 2019 Preview (build 17623). During setup I received the following message:

"Active Directory on this domain controller does not contain 
Windows Server ADPREP /FORESTPREP updates.
See https://go.microsoft.com/fwlink/?LinkId=113955."

Okay, no problem, I thought, I'd just update the schema. I started a CMD shell as an administrator, changed to the support\adprep directory on the Server 2019 DVD and entered the following command:

adprep /forestprep

The command failed, and I received the following error message in the log file:

Verifying file signature
Failed to verify file signature: error 0x800b0109.

According to the log file, an attempt was made to execute the following command:

ldifde -i -f "D:\adprep\sch88.ldf" -s "shepsrv05.shep-net.de" -h -j "C:\WINDOWS\debug\adprep\logs\20180326105311" -$ "D:\adprep\schupgrade.cat"

The file sch88.ldf seemed to be fine; nothing stood out there. For the file schupgrade.cat, however, the following error appeared in the file properties on the "Digital Signatures" tab after clicking "Details":

"A certificate chain processed, but terminated in a root certificate which is not trusted by the trust provider"

A click on “View Certificate” showed the following message:

"This certificate cannot be verified up to a trusted certification authority."

A look at the "Certification Path" tab suggested a possible explanation for the adprep error message: the root certificate used to issue the certificate of the CAT file is not recognized as trusted by my server because it is not installed on the local system. To fix this, I added the root certificate to the local machine's Trusted Root Certification Authorities store.
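
If you prefer the command line over the Certificates MMC snap-in, the import can also be done with certutil from an elevated prompt (a sketch; rootca.cer stands for the exported root certificate and is only an example file name):

certutil -addstore Root rootca.cer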

After adding the root certificate I was able to successfully execute the adprep command.

Executing PowerShell scripts on Windows clients monitored by Icinga (Nagios etc.)

For one of my customers I have set up a monitoring system based on Icinga2.
Icinga2 runs on an Ubuntu Linux host and monitors clients, servers and devices on the network from there. To monitor the Windows computers, the Icinga2 client has been installed on them, which can be downloaded here. The clients were then connected to the Icinga2 server, and the usual parameters such as CPU, RAM and hard disk usage were recorded.

My customer was very satisfied but had another special task for me: "I would like to monitor an application running on one of our Windows servers for errors," he said. "In the event of an error, the application writes a dump to a file. Can we monitor a specific directory on a Windows server so that we are notified when a new file with a specific file extension appears?" Nothing easier than that, I thought; a PowerShell script should make this easy to implement.

Unfortunately, there is no predefined command in Icinga2 to start a PowerShell script on a Windows computer. I therefore created a new command via the Icinga Director. As the command type I chose "Plugin Check Command", as the command name I used "windows-powershell". The actual command is as follows:

C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe "& '$script_name$'"

The command uses the $script_name$ variable to pass the path to the PowerShell file, which is then executed on the Windows computer. The relative path to the powershell.exe file is the same on any Windows system. However, the drive letter could theoretically be different from "C:". But I assume that on 99.99999999% of all Windows systems the operating system is installed on drive "C:". In such cases, the %SystemRoot% variable is usually used in Windows environments to make sure that the path always points to the "Windows" directory on the correct drive.

Unfortunately, this cannot be used here, since Icinga assumes that a path that does not begin with a drive letter is relative to the Icinga installation directory on the client (usually "C:\Program Files\ICINGA2").

For example, if I use the path

"%SystemRoot%\System32\WindowsPowerShell\v1.0\powershell.exe"

Icinga tries to resolve the command relative to the installation directory, which would result in the path "C:\Program Files\ICINGA2\%SystemRoot%\System32\WindowsPowerShell\v1.0\powershell.exe". Therefore, I specify the path absolutely.

So that the path of the PowerShell script file to be executed can be passed to our command, we define a new field called "script_name" on the "Fields" tab. Since our command does not work without a script, we also mark this as a mandatory field. The definition of our command looks like this:

object CheckCommand "windows-powershell"
{
   import "plugin-check-command"
   command = [
      "C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\powershell.exe",
      ""&",
   "'$script_name$'""
   ]
}

Next we define a new command called “windows-powershell-check-error-files” of type “Plugin Check Command”. In this command we import our previously defined command “windows-powershell”:

Screenshot

As the value for the variable "script_name" we specify the absolute path to the PowerShell script file that will be used to check our folder. I plan to place the script file in the Icinga sbin directory later, so in my case the path is "C:\Program Files\ICINGA2\sbin\check-error-files.ps1".

Since the script also needs to know which folder to check, we create a new argument named "-folderPath" with the value "$folder_path$" of type "string". In addition, we create the corresponding mandatory field "folder_path". After saving the change, we can specify the script path on the first tab of the command; in my case, this is "C:\Program Files\ICINGA2\sbin\check-error-files.ps1". The definition of the second command then looks like this:

object CheckCommand "windows-powershell-check-error-files"
{
   import "plugin-check-command"
   import "windows-powershell"
   arguments +=
   {
      "-folderPath" =
      {
         required = true
         value = "$folder_path$"
      }
   }
   vars.script_name = "C:\\Program Files\\ICINGA2\\sbin\\check-error-files.ps1"
}

Resolved (i.e. including the settings it inherits from the parent command "windows-powershell"), the command looks like this:

object CheckCommand "windows-powershell-check-error-files"
{
   import "plugin-check-command"
   command = [
      "C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\powershell.exe",
      ""&",
      "'$script_name$'""
   ]
   arguments +=
   {
      "-folderPath" =
      {
         required = true
         value = "$folder_path$"
      }
   }
   vars.script_name = "C:\\Program Files\\ICINGA2\\sbin\\check-error-files.ps1"
}

In the next step, we use this command in a new service template. I call the service template "PowerShell Check Error Files" and import the check command "windows-powershell-check-error-files". In addition, I specify that the service has to run on an agent (i.e. not on the Icinga server itself, where no PowerShell is available). After the configuration, the file for the service looks like this:

template Service "PowerShell Check Error Files"
{
   check_command = "windows-powershell-check-error-files"
   enable_notifications = true
   enable_active_checks = true
   enable_passive_checks = true
   enable_event_handler = true
   enable_perfdata = true
   command_endpoint = host_name
}

In the last step of the Icinga server configuration, I assign the service to a Windows computer and run the check. In this case, I use the computer on which the folder is located. It would also be possible to run the check over the network and verify whether the folder is present on another computer. However, in the default settings the Icinga service runs on Windows computers under the Network Service account. This account can access the local computer but cannot access resources on other Windows computers over the network. If network access is desired, the Icinga service account on the Windows machine should be changed to a regular domain account.

The service is assigned to the desired computer via the Icinga Director in the "Hosts" section; in my case, to the computer "Guinan".

So that the PowerShell script knows which folder it should check, I enter the path to the folder as the argument for "folderPath".

The configuration file looks like this:

object Service "PowerShell Check Error Files"
{
   host_name = "Guinan.StarTrek.net"
   import "PowerShell Check Error Files"
   vars.folder_path = "C:\\inetpub\\temp"
}

For the Icinga2 agent to be able to run the PowerShell script, it must be located on the Windows computer in the Icinga2 agent directory under "C:\Program Files\ICINGA2\sbin". There I save the following script under the name "check-error-files.ps1":

Param(
    [string]$folderPath
)


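# Returns the given exit code via the PowerShell host so that the Icinga2
# agent receives the real value; a plain 'exit 1' would always be reported
# as 0 (see the note below the script).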
function Exit-WithExitCode($exitCode)
{
    $host.SetShouldExit($exitcode)
    exit
}

if (!$folderPath)
{
    Write-Host "UNKNOWN: Parameter 'folderPath' is not set. Script will be aborted."
    Exit-WithExitCode 3
}

$result = $null
$exitcode = $null

# Does the folder exist?
if (Test-Path $folderPath )
{
    $items = Get-ChildItem -LiteralPath $folderPath -Filter "*.err"
   
    if ($items)
    {
        if ($items.Count -gt 1)
        {
            $result = ("CRITICAL: Folder '{0}' contains '{1}' error files" -f $folderPath, $items.Count)
            $exitCode = 2
        }
        else
        {
            $result = ("WARNING: Folder '{0}' contains '1' error files" -f $folderPath)
            $exitCode = 1
        }
    }
    else
    {
        $result = ("OK: Folder '{0}' contains '0' error files" -f $folderPath)
        $exitCode = 0
    }
}
else
{
    $result = ("Folder {0} does not exist" -f $folderPath)
    $exitCode = 3
}

Write-Host $result
Exit-WithExitCode $exitCode

The script accepts the parameter "folderPath" and searches this directory for files with the extension ".err". If the directory does not contain any file with the extension ".err", the script returns exit code 0.
If there is exactly one file with the extension ".err" in the directory, the script returns exit code 1 and signals a warning.
If there is more than one file with the extension in the directory, the script returns exit code 2 and signals a critical state.

The PowerShell script can be extended relatively easily by adding further parameters, e.g. to pass in the file extension to check for or the threshold values for warning and critical, as sketched below.
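
A minimal sketch of such an extension (the parameter names $fileExtension, $warningThreshold and $criticalThreshold are my own choice, not part of the original script; the existence check and Exit-WithExitCode remain as in the original):

Param(
    [string]$folderPath,
    [string]$fileExtension = ".err",  # file extension to look for
    [int]$warningThreshold = 1,       # file count that triggers WARNING
    [int]$criticalThreshold = 2       # file count that triggers CRITICAL
)

# @() makes sure Count also works when only a single file is found
$items = @(Get-ChildItem -LiteralPath $folderPath -Filter "*$fileExtension")

if ($items.Count -ge $criticalThreshold)
{
    $result = ("CRITICAL: Folder '{0}' contains '{1}' error files" -f $folderPath, $items.Count)
    $exitCode = 2
}
elseif ($items.Count -ge $warningThreshold)
{
    $result = ("WARNING: Folder '{0}' contains '{1}' error files" -f $folderPath, $items.Count)
    $exitCode = 1
}
else
{
    $result = ("OK: Folder '{0}' contains '0' error files" -f $folderPath)
    $exitCode = 0
}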

Important for the script is the function "Exit-WithExitCode". The function returns the exit code in such a way that it can actually be received by the Icinga2 agent. If the exit code is returned directly via the PowerShell statement "exit 1", the Icinga2 agent does not receive the real value but always receives the value 0.

To check on the client which command the Icinga2 agent executes, debugging can be enabled. To do this, run the following commands on the client's command line:

icinga2 feature enable debuglog
net stop icinga2
net start icinga2

The log file is created in the following directory:

C:\ProgramData\icinga2\var\log\icinga2

If everything works as desired, debugging can be disabled with the following commands:

icinga2 feature disable debuglog
net stop icinga2
net start icinga2

In the debug log, the script call looks like this:

[2018-03-04 13:50:58 +0100] notice/JsonRpcConnection: Received 'event::ExecuteCommand' message from 'Icheb.startrek.net'
[2018-03-04 13:50:58 +0100] notice/Process: Running command 'C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe ""&" "'C:\Program Files\ICINGA2\sbin\check-error-files.ps1'"" -folderPath C:\inetpub\temp': PID 3556
[2018-03-04 13:50:59 +0100] notice/Process: PID 3556 ('C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe ""&" "'C:\Program Files\ICINGA2\sbin\check-error-files.ps1'"" -folderPath C:\inetpub\temp') terminated with exit code 0
[2018-03-04 13:50:59 +0100] notice/ApiListener: Sending message 'event::CheckResult' to 'Icheb.startrek.net'

Robocopy error 1338 (0x0000053A) during file migration from NetApp to Windows Server 2016

I am working on a project to migrate CIFS shares from a NetApp filer to Windows Server 2016 file servers. To migrate the folder and file structure I use Robocopy with the following command line:

robocopy \\netapp\vol_home$ D:\vol_home /MIR /SEC /SECFIX /R:1 /W:1 /MT:32 /LOG:D:\Migration\Robocopy\vol_home_output.log /NFL /NDL /NP

The migration of the data had worked without problems so far. Today, however, Robocopy reported an error while trying to copy a directory:

2018/02/25 13:47:46 FEHLER 1338 (0x0000053A) NTFS-Sicherheit wird in Zielverzeichnis kopiert D:\vol_home\test\
Die Struktur der Sicherheitsbeschreibung ist unzulässig.

Or in English:

ERROR 1338 (0x0000053A) Copying NTFS Security to Destination Directory D:\vol_home\test\
The security descriptor structure is invalid.

I found the following information about this error online (KB2459083):

“The error is usually caused by the CIFS file server returning invalid security information for a file. For example, if the CIFS file server returns a NULL Security ID (SID) for a file’s Owner, or a file’s Primary Group, when Robocopy tries to copy this information to the destination file, Windows will return error 87 “The parameter is incorrect” or error 1338 “The security descriptor is invalid”. This is by design – file security information in Windows is expected to contain both Owner and Primary Group SIDs.”

The cause of the problem could be that no owner and/or primary group is set in the folder's security descriptor. The owner can be changed via the folder properties in Windows Explorer, so I first tried setting a different owner there. The new owner was set, but this did not solve the problem; the same error message still appeared.

Unfortunately, Windows Explorer can only display and set the owner; the primary group is not shown. Fortunately, Windows PowerShell can display the ACL of a folder including its owner and primary group. The Get-Acl cmdlet is used for this purpose:

Get-Acl \\netapp\vol_home$\test | fl

The output showed that the value for Group was empty, while a correctly configured folder shows both Owner and Group.

To set the primary group for the folder, I wrote the following PowerShell script:

$folderPath = "\\netapp2a\vol_home$\test"
$primaryGroup = "VORDEFINIERT\Administratoren"
$folder = Get-Item $folderPath
Write-Host ("ACL for folder '{0}' before change:" -f $folderPath)
$folderACL = Get-Acl $folderPath
$folderACL | fl
$newPrimaryGroupACL = New-Object System.Security.AccessControl.DirectorySecurity
$primaryGroup = New-Object System.Security.Principal.NTAccount($primaryGroup)
# Sets the primary group for the security descriptor associated with this ObjectSecurity object
# https://msdn.microsoft.com/en-us/library/system.security.accesscontrol.directorysecurity(v=vs.110).aspx
$newPrimaryGroupACL.SetGroup($primaryGroup)
$folder.SetAccessControl($newPrimaryGroupACL)

A subsequent call to the Get-Acl cmdlet showed the primary group set correctly.

Robocopy then copied the directory without any problems.
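
Since other folders on the source share may have the same problem, it can be worth scanning for them up front. A minimal sketch (the share path is just an example) that lists every directory without a primary group:

$root = "\\netapp\vol_home$"
Get-ChildItem -LiteralPath $root -Recurse -Directory | ForEach-Object {
    $acl = Get-Acl -LiteralPath $_.FullName
    if (-not $acl.Group)
    {
        # primary group is missing on this folder
        $_.FullName
    }
}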

openHABian with Raspberry Pi – Part 1: Installation of openHABian on the Raspberry Pi 3

The holiday season in December is traditionally the time of year when I try to finish the projects I started over the year :-).
This December I want to finish the installation and configuration of my Raspberry Pi 3 together with openHAB 2 and openHABian.

openHAB is a great piece of software that is vendor independent and allows you to control a lot of different smart home devices; it acts as the brain of your smart home environment. To ease the installation and use of openHAB, the team behind it has created a distribution that includes openHAB and all components required to get started.

This series of blog posts describes the steps to install and configure an openHAB 2 installation and use it to control the following devices:

– Philips Hue bulbs
– Philips Hue light strip
– Xiaomi Vacuum 1
– LG TV
– Vu+ DUO2
– Marantz 1603
– Xbox One
– …

And of course I want to control those devices with voice commands via my Amazon Echos, aka Alexa.

The series is divided into the following parts:

Part 1: Installation of openHABian on the Raspberry Pi 3
Part 2: Configuration of openHAB
Part 3: Adding Hue devices to openHAB
Part 4: Adding Logitech Harmony Hub and the living room devices to openHAB
Part 5: Adding Xiaomi vacuum cleaner type 1 to openHAB
Part 6: Adding voice control with Alexa

This is part 1 of the series. In this part I will describe the installation and network configuration of openHABian on my Raspberry Pi 3.

Part 1: Installation of openHABian on the Raspberry Pi 3

I bought a Raspberry Pi 3 starter kit on Amazon:

https://www.amazon.de/Vilros-Raspberry-Pi-Complete-Kit-Enthalt/dp/B01DC6MKAQ

Included in this kit are the Raspberry Pi 3, a 16 GB micro SD card, a case for the Raspberry, a power supply, a heat sink and an HDMI cable to connect the Pi to a monitor or TV. I assembled the parts and got a copy of the openHABian image from the following URL:

https://github.com/openhab/openhabian/releases

Use the link to the newest release notes and images:

openHAB download

At the time of writing, the current release was openHABian 1.4. Download the Raspberry Pi image from the download site and save it to your computer:

openHAB download 2

The image file is compressed, so you have to expand it before you can use it. I use 7-Zip to expand the image file. After expanding, the file should have the extension .img.

Write the image to the SD card

I used Windows 10, a UGREEN USB card reader and the Etcher application to write the image file to the SD card. Insert the SD card into the card reader, download and start Etcher, and select the openHABian image file you downloaded and expanded before. Use the SD card as the destination for the image and start the write process with the "Flash!" button.

First boot of the Raspberry Pi

Insert the SD card into the Raspberry Pi 3 and connect the Pi to the power supply, the network and (if you want) to a monitor and keyboard. My Pi looks like this with all cables plugged in (you can also see my connected mouse, even though I do not use it):

Raspberry Pi 3 with cables

The Pi should start and display boot messages. After the boot process has completed you should see the login screen:

openHABian Login

Login to the console

You can log in with the openHABian default username and password, openhabian/openhabian:

openHABian after Login

If the login was successful you should see some hardware and utilization information about the Pi, a short description of the first actions to take, and the shell prompt.

If you don’t like the default password of the openhabian user, you can change it now with the following command:

passwd

This changes the password of the current user. You first need to provide the current password (openhabian) and then enter the new password twice. The change takes effect immediately, and you can use the new password the next time you log in via console or SSH.

Login without keyboard and monitor

If you have no monitor and keyboard connected, you can also log in to the Pi via SSH, but you need to find the IP address of the Pi first. The network configuration of the Pi is set to DHCP by default, so your DHCP server should contain a lease entry for the Pi. You could also create a DHCP reservation and restart the Pi to make sure it always gets the same IP. If you don't like this approach, have a look at the next section, where I give instructions on how to set a static IP configuration.
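
If you cannot look at the DHCP server, a ping scan of your subnet is a quick alternative. A sketch with nmap from another machine (assuming your network is 192.168.1.0/24; adjust this to your own addressing):

nmap -sn 192.168.1.0/24

The Pi should show up in the list of hosts that answered.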

I use PuTTY as SSH client under Windows. Download and start it, enter the IP address of the Pi and click "Open". PuTTY should establish the connection and show the login prompt. Enter the same default credentials as for the console login, openhabian/openhabian.

Change IP configuration

If you want to set a static IP configuration for the Pi you need to edit the file /etc/dhcpcd.conf.

To do so, connect to the Pi via SSH or the console and log on as openhabian/openhabian. We need elevated rights to save changes to /etc/dhcpcd.conf, so we have to start the text editor nano with the sudo command like this:

sudo nano /etc/dhcpcd.conf

The system asks for the password of the openhabian user again, so type it in and hit Enter.

openHABian edit dhcpcd.conf

The file opens and we can insert our static IP configuration in the section "# Example static IP configuration". I use the following configuration at home:

interface eth0
static ip_address=192.168.35.245/24
static routers=192.168.35.254
static domain_name_servers=192.168.35.35 192.168.35.18

The value

interface eth0

identifies the network card that should use our IP configuration.

The entry

static ip_address=

contains the IP address you want to use, with the subnet mask in CIDR notation (/24 = 255.255.255.0) appended to the address.

The line 

static routers=

sets the default gateway to reach the other subnets and the internet.

The last line 

static domain_name_servers=

sets the DNS servers which should be used by the Pi.

When you are done with your changes, press Ctrl+X to exit the text editor. If the file was changed, nano asks whether it should be saved. Answer the question with "Y" and Enter.

Now the file is changed, but the Pi keeps using the old IP configuration until you reboot it or restart the networking with the following command:

sudo /etc/init.d/networking restart

If you are connected via SSH and have changed your IP configuration, your session will end and you need to create a new session to the new IP address.

You can test the new configuration by pinging the Google DNS server with the following command:

ping 8.8.8.8

If you get a response to the ping, you know the connection to the internet is OK. Next, you can try to ping the Bing website with the following command:

ping www.bing.com

If this also works, DNS resolution is OK too. If you have trouble with one or both tests, check your network configuration and make sure your ping packets are not blocked by your internet router's firewall.

Update the Pi

To install the latest updates for the Pi use the following commands:

sudo apt update && sudo apt upgrade

This updates the information about available software updates and then starts the upgrade process. If new upgrades are found, you need to confirm the installation with Enter.

This completes the first part of the series. Please read the next part, where I describe the initial configuration of openHAB.

Error message installing Workflow Manager and Service Bus

I needed a Workflow Manager installation for my SharePoint 2016 farm and used the Web Installer to do all the installation work for me, following this guide:

https://gallery.technet.microsoft.com/SharePoint-2016-Workflow-acd5ba2a

When the installation finished I configured Workflow Manager with the wizard and got the following error message during the configuration:

System.Management.Automation.CmdletInvocationException: An object or column name is missing or empty. For SELECT INTO statements, verify each column has a name. For other statements, look for empty alias names. Aliases defined as "" or [] are not allowed. Change the alias to a valid name.

The reason for this error in my case was the language of the SQL Server. Mine is installed in German, which means there is no group called BUILTIN\Administrators; instead there is a group called VORDEFINIERT\Administratoren. So I changed the values for the admin groups in the wizard to their German names, but still no luck: the wizard only accepts BUILTIN\Administrators, not VORDEFINIERT\Administratoren. So I changed my admin groups to domain groups like CONTOSO\Domänen-Admins, and that worked; I was able to complete the installation.

ResourceUnhealthyException when moving a mailbox back to on-premises Exchange

Over the last couple of days I did some tests with Exchange Online and Exchange on-premises hybrid environments. I connected my on-premises Exchange environment with Exchange Online and moved a mailbox to the cloud (onboarding). The move succeeded and the mailbox was reachable in the cloud. After this I wanted to move the mailbox back to my on-premises environment (offboarding) and created a new move request. This time the move failed with the following error in the move log:

Error = Transient error ResourceUnhealthyException has occurred

The reason for this error was that the search index of my on-premises databases was corrupt. Before Exchange starts the mailbox move, it checks whether the destination mailbox database is healthy. If it is not, because the search index is corrupt, the move fails with the error message above. I followed the instructions in the following articles to recreate the search index of my mailbox databases:

https://practical365.com/exchange-server/fix-failed-database-content-index-exchange-2013/

https://blog.brankovucinec.com/2015/10/02/fix-the-exchange-2013-content-catalog-index/
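
Before rebuilding (or retrying the move), the state of the content indexes can be checked in the Exchange Management Shell; a quick check (assuming Exchange 2013/2016) is to look for anything other than "Healthy" in the ContentIndexState column:

Get-MailboxDatabaseCopyStatus | Format-List Name,Status,ContentIndexState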

The whole process of index recreation took a few hours to complete. Afterwards I created a new move request, and this time the move worked; the mailbox was moved back to the on-premises Exchange organization.

The response status code is ‘Unauthorized’ using Connect-PnPOnline

We use the SharePoint Patterns and Practices (PnP) framework in a project to provision SharePoint site collections in SharePoint Online. For a few days now, when I log on to SharePoint Online with the PowerShell cmdlet Connect-PnPOnline, I have been receiving the following error message:

Cannot contact web site 'https://customer-admin.sharepoint.com/' or the web site does not support SharePoint Online credentials. The response status code is 'Unauthorized'. The response headers are 'X-SharePointHealthScore=0, X-MSDAVEXT_Error=917656; Access+denied.+Before+opening+files+in+this+location%2c+you+must+first+browse+to+the+web+site+and+select+the+option+to+login+automatically., SPRequestGuid=ac6c229e-80d9-4000-83e5-3a2938f84c4b, request-id=ac6c229e-80d9-4000-83e5-3a2938f84c4b, MS-CV=niJsrNmAAECD5TopOPhMSw.0, Strict-Transport-Security=max-age=31536000, X-FRAME-OPTIONS=SAMEORIGIN, SPRequestDuration=187, SPIisLatency=1, MicrosoftSharePointTeamServices=16.0.0.6927, X-Content-Type-Options=nosniff, X-MS-InvokeApp=1; RequireReadOnly, X-MSEdge-Ref=Ref A: 4A35F4682A1449D1967ECE00FC378207 Ref B: AMSEDGE1019 Ref C: 2017-10-13T07:23:37Z, Content-Length=0, Content-Type=text/plain; charset=utf-8, Date=Fri, 13 Oct 2017 07:23:36 GMT, P3P=CP="ALL IND DSP COR ADM CONo CUR CUSo IVAo IVDo PSA PSD TAI TELo OUR SAMo CNT COM INT NAV ONL PHY PRE PUR UNI", X-Powered-By=ASP.NET'.

The response contains the following text:

Access denied. Before opening files in this location, you must first browse to the web site and select the option to login automatically.

Although my credentials are correct, and I can easily log on to the SharePoint Online admin page with them, the cmdlet does not accept them. A similar problem is listed in the GitHub issue list for the PnP framework:

https://github.com/SharePoint/PnP-PowerShell/issues/1126

In this case, the solution was to replace the CSOM libraries included in the PnP framework with the most current ones from NuGet. I have not tested this approach.

Another solution I’ve found is to set the parameter LegacyAuthProtocolsEnabled to $True in the SharePoint Online tenant, but that didn’t work for me:

https://blog.areflyen.no/2017/06/18/problem-with-connecting-to-sharepoint-online-in-office-365-with-powershell-sharepoint-designer-and-other-3-party-tools/
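
For reference, this tenant setting is changed with the SharePoint Online Management Shell (a sketch; the admin URL matches the tenant from the error message above and must be adjusted to your own):

Connect-SPOService -Url https://customer-admin.sharepoint.com
Set-SPOTenant -LegacyAuthProtocolsEnabled $True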

In my case, I logged on using the -UseWebLogin parameter of the Connect-PnPOnline cmdlet. The cmdlet call then looks like this:

Connect-PnPOnline -Url $Url -UseWebLogin

This causes the cmdlet to open an Internet Explorer window, load the SharePoint Online login page, ask for my credentials, and then log me on. That worked for me.


Change Windows product key from command line and activate it

Sometimes, when I want to change the product key on Windows Server 2016, I can't do it because nothing happens when I click the link in the activation settings. When this happens I use slmgr.vbs from the command line to change the key and activate it online.

Use the following command to change the product key:

slmgr.vbs /ipk ABCDE-FGHIJ-KLMNO-PQRST-UVWXY

And this command to activate it online:

slmgr.vbs /ato
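
To verify the result afterwards, slmgr.vbs can also display the current license status (use /dlv for more verbose output):

slmgr.vbs /dli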

The following site has a lot of examples on how to use slmgr.vbs:

https://www.howtogeek.com/245445/how-to-use-slmgr-to-change-remove-or-extend-your-windows-license/