Internet Problem

Didn't see any comments on Process Explorer, but it's simple, and it's where I've found problems in the past. https://technet.microsoft.com/en-us/sysinternals/bb896653.aspx

By connecting the wireless computers directly to the router with cables and then running Process Explorer, any problems with programs should show up.
When I do this, there are no problems. So when I disconnect one computer and the second (wireless) computer has trouble connecting, I know it's a problem with DHCP.
What is DHCP and How DHCP Works? (DHCP Fundamentals Explained)
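
If you want to see what DHCP actually handed out on a Windows machine, the built-in ipconfig commands are a simple sketch of how to check (run from a command prompt):

C:\>ipconfig /all
(shows each adapter's DHCP server, lease times, and DNS servers)

C:\>ipconfig /release
C:\>ipconfig /renew
(hands the current lease back and asks the router for a fresh one)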

Next, the DNS... note the comment about the DNS cache:
HTG Explains: What is DNS?
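
On Windows you can also look at and clear the local DNS cache with ipconfig (a quick sketch; flushing is harmless, but it only helps if stale entries are the problem):

C:\>ipconfig /displaydns
(lists what is currently sitting in the resolver cache)

C:\>ipconfig /flushdns
(clears the cache so names get looked up fresh)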

Frankly, I don't understand all of this, but in my case I'm guessing that it's because of my home network setup: a wireless router with four wired connections.
Computer 1 Wired
Computer 2 Wired
Computer 3 Wireless
Roku 1 Wired
Roku 2 Wireless
Remote computer weather station Wireless

In addition to this I connect with tablets...

Now, one more thing... I use Plex, which is constantly updating, as are the Rokus.

I suspect that as different connections come and go, the connection searches change... trying and discarding the cache(s).

Once the computers are connected, the speeds are usually even... wired at about 30 Mbps, and wireless at about 15 to 20 Mbps.

If I have not used a computer for a few days, I can expect MSIE and Windows updates, which usually take three to five minutes to complete. I have used "Notify Me" for other updates on all programs.
Am not suggesting solutions, as most of this is still confusing to me. Am certainly not a geek.

..................................................................................
One thing to add. By Googling "resolving host problem" you'll come up with more than a million hits. The first several sites cover Google Chrome, but some of the later sites explain other solutions. A few months ago, I had such a problem with this that I ended up wiping one laptop clean and installing XP... the only system I had rights to. This solved the problem, but I still have no idea what caused it in the first place.

Have come to the conclusion:
"There are some things we are not given to know."
 
Sorry, but I don't buy it.

It would be fairly simple for a computer to measure the time between when you click your mouse and when the requested action actually occurs. It could determine "excessive" when that interval increases by X% over when the computer was first put into service.

Why should one have to look at all the programs running individually and do an internet search to decide which ones are critical to the computer's operation and which are just wasting computing power? Isn't that information all readily available?

And couldn't the computer easily tell you if the problem was internal or if it was waiting for additional data from the internet? Can't it measure download speeds and display that information?
You've presented many interesting questions. They have probably been rattling around in smarter minds than mine for 50 years or more.

One attempt is the Windows Experience Index. Yesterday I updated that report on an i3 computer that was purchased within the last two years. It gave me several numbers, and each required reading from several of the links presented. To be honest with you, it was of no help in determining that the computer was infected with viruses and malware due to the user's behavior over several months. I had to boot the computer and use several tools to get a better idea of what was going on. Then I used several tools to fix the problems. Theoretically, the installed AV should have been monitoring things, but you know how users get.
:cool:

Another attempt is the various troubleshooters included in the OSes.
 
I FIRE'd as an engineer to escape the endless troubleshooting, updating, and tail chasing of 'puters.


When problems like this do show up at home - it makes me grind my teeth - and that is not a good thing! :D
 
Have come to the conclusion:
"There are some things we are not given to know."
For anyone reading your description, it is a very good example of how complex things have become. You could have a "what's wrong" button on your computer, but how would it diagnose or even know about the other devices?

Even an experienced troubleshooter would have difficulty, and I promise it would take an hour or two just to survey the equipment, and come up with a reasonable description of the network, and where possible problems lie.

The great thing about computers (for the fixers) is that the same behavior on two systems could be caused by more than one detail. Users believe there is one magic setting, but when you look at the range of configuration settings, and just take a WAG at the number of possible interactions, the number is staggering.

So you rely on best practices, known good settings, and always make sure you can back out the change just made.

I organize around the chaos. I can't control or know all of the computer universe. But I can rapidly eliminate devices from the problem, and build up and out from known good.

Problem statement is #1 step. What exactly is the problem? From that it is possible to dig a little deeper to find out if it is just one problem (rare) or several problems that have built up over time.
 
You've presented many interesting questions. They have probably been rattling around in smarter minds than mine for 50 years or more.........
I don't even know enough about computers to be dangerous, but I've learned a few things about human nature. It seems like geeks are so close to the details that they can't see the forest for the trees. All these individual processes are going on at once, but no one and nothing seems to be overall in charge of the user experience.
 
Take your idea to the thousands of people that have worked to make this job easier to do. Write it up and send it to IBM, Microsoft, Oracle et al. They spend billions each year attempting to make problems easier to detect. I'm sure they would appreciate the help.;)

On bigger iron there are better toolsets. They are expensive and designed for systems running many workloads. Even with advanced toolsets, it relies on folks that understand the internals of many disparate technologies.

Look at logical partitioning and the ability to add resources on the fly. If a program is slow because it is looping, does adding more CPU fix the problem? If it's slow waiting on a semaphore, what does that tell someone about a corrective action? If it's slow waiting on a distributed transaction to commit an LUW, what then? Kill the transaction, or partially commit some changes, or just roll the LUW back, hoping there is a user watching a UI for a response? In at least two cases the data are in indeterminate states.

This simple solution quickly becomes much more complex. Then throw in simple things such as the observer effect:

http://en.m.wikipedia.org/wiki/Observer_effect_(physics)

The problem grows, and of course going across the internet and all its millions of computing devices gets complicated very fast.

I admire your determination to make it easier. In the old days you had a mainframe, controllers, and a network that was private. Problem solving, especially for performance issues, has become so much more difficult over the last 30 years.
 
I don't even know enough about computers to be dangerous, but I've learned a few things about human nature. It seems like geeks are so close to the details that they can't see the forest for the trees. All these individual processes are going on at once, but no one and nothing seems to be overall in charge of the user experience.
When you think about it, who's in charge of anything? Seems chaos (entropy?) is a natural part of our universe. :)
 
Sorry, but I don't buy it.

It would be fairly simple for a computer to measure the time between when you click your mouse and when the requested action actually occurs. It could determine "excessive" when that interval increases by X% over when the computer was first put into service.

Why should one have to look at all the programs running individually and do an internet search to decide which ones are critical to the computer's operation and which are just wasting computing power? Isn't that information all readily available?

And couldn't the computer easily tell you if the problem was internal or if it was waiting for additional data from the internet? Can't it measure download speeds and display that information?

I'll echo others and say that this seems like it should be do-able, but the complexity of our systems makes it pretty difficult. But I think they could be doing a better job.

Like others, I get frustrated when something isn't working right on one of our systems. But when I look at it in a different light, all these configuration options, all the various settings, apps, interactions with a keyboard, mouse, wi-fi, display, touchpad, bluetooth, etc - it's amazing the thing can even boot at all!

Consider a controlled system - something like the engine control computer in your car, or an ATM, or your microwave. These systems rarely, if ever, 'error'. And one big reason is they don't allow any changes to the system. It's tightly controlled. It's much harder to analyze how a system should be working when that system can be configured millions of different ways by the user in millions of different environments.

Apple made some changes recently, where they report on apps using 'significant energy' under the battery menu-bar icon. Not sure of the details, but that sounds like a step in the right direction.

A while back, DD mentioned that the fan on her MacBook Pro was always on, and the battery didn't last very long between charges. I looked at Activity Monitor and found that some printing process was using 99% CPU constantly. It was hung up, trying to print to a printer that wasn't available (a network printer back at her school). I killed the process and all was well.

Yes, it seems that the system should be able to report on things like that - at least tell the user something is using lots of CPU, and they can click 'OK' if that makes sense for what they are doing. Like when I convert a ripped CD music file from lossless FLAC to compressed mp3 or ogg for use in a portable player - that process can fully utilize all the cores in my system at near 100% for as long as it takes to do the conversion - which is exactly what you want, work it!
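
For anyone who prefers the terminal to Activity Monitor, here's a rough sketch of the same hunt on a Mac (the process ID shown is just a placeholder):

top -o cpu
(sorts the process list by CPU so the hog floats to the top; note its PID and quit with q)

kill <pid>
(asks that process to quit; kill -9 <pid> if it ignores you)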

-ERD50
 
Here's how I understand things now:

I had gotten 100 gigabytes of free OneDrive space via Bing (note there's a new offer to get 100 GB for two years by agreeing to receive spam). It's a great resource for offsite backups, or so I thought.

I was using it for my weekly 6 GB Docs/Photos backup, done automagically at 3 AM every Friday. I'd wake up on Friday and see the backed-up file in the cloud (on OneDrive).

What I didn't realize was that the computer was placing each backup in a folder in my Documents directory, then gradually syncing it with the file on OneDrive. The 100 GB in the cloud filled up a few months ago, although there was no warning to that effect. If I'd opened the OneDrive app it would have told me, and I think the icons in File Explorer had little exclamation marks on them, but I had no idea it was in trouble.

So in the background, OneDrive was desperately trying to sync seventeen 6 GB files to the cloud and failing. It was freaking out, but I didn't know that. THAT was the source of my problem.
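
One way I could have caught this on Windows is the built-in Resource Monitor (just a sketch of where to look, not a OneDrive-specific tool):

C:\>resmon
(open the Network tab and sort "Processes with Network Activity" by Send (B/sec); a sync client stuck uploading will sit right at the top)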

My new backup strategy:

I now leave a 64 GB thumb drive in the computer. Every Friday at 3 AM the computer will automagically save a backup to that drive (with a great, free app called Backup Maker). It will cycle through seven backups.

Periodically I will copy one of those files to OneDrive (or better, to Google Drive, which doesn't try to sync).

In addition, in my writing app, Scrivener, I can click a button to back up a copy of my book. For the book I'm finishing (90,000 words), the backup file is about 20 MB in size. When I do the backup, I see the little OneDrive icon in my system tray change to show some sync arrows, and it takes about a minute to finish syncing.

This is what happens with ping while it's syncing:

C:\Windows\system32>ping 8.8.8.8 /t

Pinging 8.8.8.8 with 32 bytes of data:
Request timed out.
Reply from 8.8.8.8: bytes=32 time=1313ms TTL=58
Reply from 8.8.8.8: bytes=32 time=18ms TTL=58
Request timed out.
Reply from 8.8.8.8: bytes=32 time=1044ms TTL=58
Reply from 8.8.8.8: bytes=32 time=85ms TTL=58
Reply from 8.8.8.8: bytes=32 time=18ms TTL=58
Request timed out.
Reply from 8.8.8.8: bytes=32 time=3410ms TTL=58
Reply from 8.8.8.8: bytes=32 time=18ms TTL=58
Request timed out.
Request timed out.
Reply from 8.8.8.8: bytes=32 time=17ms TTL=58
Reply from 8.8.8.8: bytes=32 time=18ms TTL=58
Reply from 8.8.8.8: bytes=32 time=18ms TTL=58
Reply from 8.8.8.8: bytes=32 time=18ms TTL=58
Reply from 8.8.8.8: bytes=32 time=18ms TTL=58
Reply from 8.8.8.8: bytes=32 time=17ms TTL=58
Reply from 8.8.8.8: bytes=32 time=18ms TTL=58
Reply from 8.8.8.8: bytes=32 time=17ms TTL=58
Reply from 8.8.8.8: bytes=32 time=17ms TTL=58
Reply from 8.8.8.8: bytes=32 time=18ms TTL=58
Reply from 8.8.8.8: bytes=32 time=17ms TTL=58
Reply from 8.8.8.8: bytes=32 time=18ms TTL=58
Reply from 8.8.8.8: bytes=32 time=18ms TTL=58
Reply from 8.8.8.8: bytes=32 time=18ms TTL=58

Ping statistics for 8.8.8.8:
Packets: Sent = 26, Received = 21, Lost = 5 (19% loss),
Approximate round trip times in milli-seconds:
Minimum = 17ms, Maximum = 3410ms, Average = 293ms

For my new book that I've started, the backup file is tiny, and only takes seconds to sync.
 
When you think about it, who's in charge of anything? Seems chaos (entropy?) is a natural part of our universe. :)
I think this is why they have engineers running nuclear power plants instead of poets. :LOL:
 
Al, you are more patient than I am. I would have ditched OneDrive in disgust by now. I normally back up manually to an external hard drive, with a secondary partial backup of crucial files to a thumb drive. I should probably look into automating it some day.

Thanks for the thread - - it made me think about backups, and to investigate how large my full backups are these days (which I haven't checked in eons). I discovered that my most extensive backups are only 12 GB, which I can easily fit on a thumb drive instead of just putting a partial backup there. :facepalm:
 
Here's an interesting question:

Would my bandwidth-hogging problem affect the internet speeds of my neighbors?
 
Here's an interesting question:

Would my bandwidth-hogging problem affect the internet speeds of my neighbors?

As I understand it, cable capacity is shared at some points, so I think the answer could be 'yes'.

I'm also revisiting my backup strategy. I like using rsync, as you know what's going on since you set up the commands yourself - no wondering what some background program is doing.

rsync has an option to throttle the bandwidth it uses. If you used that, you could probably reduce the effect on your own systems and your neighbors'.
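
For example, something like this (just a sketch; --bwlimit is in KB per second, so 2000 is roughly 2 MB/s - pick a number well under your upstream cap):

rsync -av --partial --bwlimit=2000 <your source dir> <your dest dir>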

I'm not that great with the terminal command-line stuff, so I started out using Grsync, which adds a GUI front end to rsync. But every once in a while, I'd accidentally click something as I moved around the screen, and I could never be sure if I had ended up selecting/de-selecting something. It was pretty easy to figure out which boxes were related to which options, though, so I wrote out the command lines, and now I just copy/paste from a text file I keep for this.

A typical command to do an incremental backup from my drive to an external USB is like this:

sudo rsync --progress -r -t -p -o -g -l -H --partial -s <your source dir> <your dest dir>


-r (recursive); -t (preserve times); -p (preserve permissions); -o (preserve owner); -g (preserve group);
--progress (show progress during transfer);
-l (copy symlinks as symlinks); -H (preserve hard links);
--partial (keep partially transferred files);
-s (--protect-args: no space-splitting on the remote side). If you need to transfer a filename that contains whitespace, you can specify the --protect-args (-s) option,...


The thing I'm looking into now is setting a limit for how far back it looks. Once I have full backups to my USB drives (done every few weeks), I could do an incremental to a flash drive if I set it to just the most recent few months. That could run every day.
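
I haven't settled on the exact command, but one sketch would be GNU find plus rsync's --files-from (the directory names here are just placeholders):

find /home/me/Documents -type f -mtime -90 -printf '%P\n' > /tmp/recent.txt
(lists files modified in the last ~90 days, with paths relative to the Documents folder)

rsync -av --files-from=/tmp/recent.txt /home/me/Documents /media/flashdrive/recent-backup
(copies only those files, keeping the same directory structure under the destination)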

-ERD50
 
...(snip)...
Thanks for the thread - - it made me think about backups, and to investigate how large my full backups are these days (which I haven't checked in eons). I discovered that my most extensive backups are only 12 GB, which I can easily fit on a thumb drive instead of just putting a partial backup there. :facepalm:
My (probably older) thumb drives seem to take a long time to read/write. If you're putting 12GB on yours, roughly how long does it take to write? Also is it USB 2 or USB 3 and recently purchased?
 
My (probably older) thumb drives seem to take a long time to read/write. If you're putting 12GB on yours, roughly how long does it take to write? Also is it USB 2 or USB 3 and recently purchased?

If you are doing an incremental backup, it will likely be pretty small, and take little time. Unless you generate 12GB between incremental backups, but that's a lot.

-ERD50
 
If you are doing an incremental backup, it will likely be pretty small, and take little time. Unless you generate 12GB between incremental backups, but that's a lot.

-ERD50
That assumes one is using backup software for the thumb drive. I do that for an external hard drive weekly.

My thumb drive is redundant storage, so I just back up some key folders once a month. In that case, I just wipe away the old and copy the new fully. No compression, so I can go get a file easily if I need a backup copy.
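
On Windows, the built-in robocopy can do that wipe-and-copy in one step, if you're so inclined (a sketch with made-up paths; /MIR mirrors the source, which also deletes anything on the destination that's no longer in the source, so be sure to point it at the right folder):

robocopy C:\Users\Me\Documents E:\KeyFolders\Documents /MIR
(copies new and changed files and removes leftovers, so the thumb drive exactly matches the source)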
 
There is a way to throttle bandwidth for a process like OneDrive, but I rejected it as too messy.

 
Incremental backups are great for space, but when you're stressed and searching for a file you deleted, it can add stress. You have to find just the right backup. Maybe newer backup software handles that well.

 
My (probably older) thumb drives seem to take a long time to read/write. If you're putting 12GB on yours, roughly how long does it take to write? Also is it USB 2 or USB 3 and recently purchased?

(Like you, normally I just copy a couple of crucial files to the thumb drive right now; I do the full backup to my portable external hard drive.) My present thumb drive is only 8 GB, but I noticed several 16 GB USB3.0 thumb drives on Amazon in the $7-$11 range.

So, the answer is, right now I don't know. I never do incremental backups; I just copy everything over. It takes a few minutes to copy over to my portable external hard drive, so it would probably take longer to copy over onto a thumb drive.
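
Rough numbers, just as a ballpark: a typical USB 2.0 thumb drive writes at maybe 5-15 MB/s, so 12 GB could take somewhere around 15 to 40 minutes; a decent USB 3.0 stick writing at 50-100 MB/s would be more like 2 to 4 minutes. Actual speeds vary a lot from drive to drive.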

However, I can always think of things to do while I'm doing something like that, such as play video games on my Nintendo 3DSXL portable console, or do a jigsaw puzzle on my iPad, or cook dinner, and so on. Tasks like this are made for a multi-tasking mind.
 
Al,


Your cable modem has an upstream limit (as well as a downstream limit). And your traffic is also shaped at the CMTS as shown in that video.

So it is unlikely that your hogging ways are affecting the neighbors. The system design limits you, as it was expecting you to be a hog.
:angel:
 
Incremental backups are great for space, but when you're stressed and searching for a file you deleted, it can add stress. You have to find just the right backup. Maybe newer backup software handles that well.


Not the way I do 'em. The way Rsync (and many other systems) works is, your first backup copies everything. Subsequent runs just add whatever has been added/changed. Directory structure and everything 'looks' just like the source.

That's the only way I'll do it. I don't want some backup file that has to be processed to recreate the original. I want to go in and look for a file and see that it is there. I routinely test my backups by doing just that.

-ERD50
 
Curiosity question:
Is there anyone who does NOT use "Everything"?
From the standpoint of memory, it's a program I could not live without.

The second program that is invaluable to me is SlimCleaner... not just for junk files, but for almost all speed factors and extraneous files.

We all know what we know... most will never follow suggestions... even when they're free. :(
 
Curiosity question:
Is there anyone who does NOT use "Everything"?
From the standpoint of memory, it's a program I could not live without.

The second program that is invaluable to me is SlimCleaner... not just for junk files, but for almost all speed factors and extraneous files.

We all know what we know... most will never follow suggestions... even when they're free. :(

"Everything"? Looks like an indexed search tool? That's been built into Mac OSX for years now ("Searchlight" I think it's called), no additions required. I use RECOLL on Ubuntu/Linux. Indexed search is not built into Windows?

"SlimCleaner" - looks like defragging? Not sure that's a big deal with a modern OS.

-ERD50
 
Not the way I do 'em. The way Rsync (and many other systems) works is, your first backup copies everything. Subsequent runs just add whatever has been added/changed. Directory structure and everything 'looks' just like the source.

That's the only way I'll do it. I don't want some backup file that has to be processed to recreate the original. I want to go in and look for a file and see that it is there. I routinely test my backups by doing just that.

-ERD50

Here's the issue I have, and maybe rsync resolves it:

On the morning of June 1, I delete an important file.:facepalm:

With weekly full backups I just go back to the backup for the previous Friday, and restore my file.

With weekly incremental backups, the file may not be on last Friday's backup, because maybe it hasn't been modified for three months. So now I need to work back through the incremental backups and find one made within a week of the last modification. I can't skip any, because then I might not get the most recent version.

But perhaps rsync handles that situation, and keeps track of all the mods, and gives you a list of the files as if you had a full backup. Does it?
 
Here's the issue I have, and maybe rsync resolves it:

On the morning of June 1, I delete an important file.:facepalm:

With weekly full backups I just go back to the backup for the previous Friday, and restore my file.

With weekly incremental backups, the file may not be on last Friday's backup, because maybe it hasn't been modified for three months. So now I need to work back through the incremental backups and find one made within a week of the last modification. I can't skip any, because then I might not get the most recent version.

But perhaps rsync handles that situation, and keeps track of all the mods, and gives you a list of the files as if you had a full backup. Does it?

You are waaaaay over thinking this. When it comes to backups, I keep things simple, simple, simple.

All these files are in ONE PLACE (times however many separate backups you want to make). Like I said, "Directory structure and everything 'looks' just like the source."

So everything goes in on the first run, and additions are made with each incremental run. Your deleted file would still be there; it doesn't delete anything (unless you specify that). There is no 'going back to different backups'. It's simple, real simple. IOW, it is always a 'full backup'.
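
To make the "unless you specify that" part concrete, here's a sketch with placeholder paths. The first form leaves deleted files on the backup; adding --delete turns it into a true mirror:

rsync -av /home/me/Documents/ /media/usbdrive/Documents/
(files you deleted from the source stay on the backup drive)

rsync -av --delete /home/me/Documents/ /media/usbdrive/Documents/
(removes them from the backup too, so it exactly mirrors the source)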

rsync is open source, very stable, and supported on just about everything.

-ERD50
 